00:00:00.001 Started by upstream project "autotest-per-patch" build number 124198 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.141 Fetching changes from the remote Git repository 00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.207 Using shallow fetch with depth 1 00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.207 > git --version # timeout=10 00:00:00.263 > git --version # 'git version 2.39.2' 00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.312 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.312 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.105 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.117 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.128 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:08.128 > git config core.sparsecheckout # timeout=10 00:00:08.138 > git read-tree -mu HEAD # timeout=10 00:00:08.154 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:08.172 Commit message: "pool: fixes for VisualBuild class" 00:00:08.172 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:08.260 [Pipeline] Start of Pipeline 00:00:08.272 [Pipeline] library 00:00:08.273 Loading library shm_lib@master 00:00:08.273 Library shm_lib@master is cached. Copying from home. 00:00:08.285 [Pipeline] node 00:00:08.295 Running on CYP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.297 [Pipeline] { 00:00:08.305 [Pipeline] catchError 00:00:08.306 [Pipeline] { 00:00:08.319 [Pipeline] wrap 00:00:08.330 [Pipeline] { 00:00:08.336 [Pipeline] stage 00:00:08.338 [Pipeline] { (Prologue) 00:00:08.526 [Pipeline] sh 00:00:08.811 + logger -p user.info -t JENKINS-CI 00:00:08.825 [Pipeline] echo 00:00:08.826 Node: CYP6 00:00:08.833 [Pipeline] sh 00:00:09.129 [Pipeline] setCustomBuildProperty 00:00:09.138 [Pipeline] echo 00:00:09.139 Cleanup processes 00:00:09.142 [Pipeline] sh 00:00:09.423 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.423 1216371 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.436 [Pipeline] sh 00:00:09.725 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.725 ++ grep -v 'sudo pgrep' 00:00:09.725 ++ awk '{print $1}' 00:00:09.725 + sudo kill -9 00:00:09.725 + true 00:00:09.737 [Pipeline] cleanWs 00:00:09.746 [WS-CLEANUP] Deleting project workspace... 00:00:09.746 [WS-CLEANUP] Deferred wipeout is used... 
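The prologue above ends by killing anything still running out of the previous build's spdk checkout. A minimal standalone sketch of that cleanup, assuming only what the trace shows (the workspace path is taken from the log; the script framing and the empty-PID guard are assumptions, not part of the SPDK scripts):

#!/usr/bin/env bash
# Sketch of the stale-process cleanup traced above; not the SPDK scripts themselves.
set -euo pipefail

WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# Full command lines of anything still running from the old spdk checkout,
# minus the pgrep invocation itself, reduced to bare PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' || true)

# The trace shows "kill -9" being run with no PIDs and tolerated via "+ true";
# guard the empty case here instead.
if [ -n "$pids" ]; then
    sudo kill -9 $pids || true
fi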
00:00:09.752 [WS-CLEANUP] done 00:00:09.756 [Pipeline] setCustomBuildProperty 00:00:09.768 [Pipeline] sh 00:00:10.048 + sudo git config --global --replace-all safe.directory '*' 00:00:10.113 [Pipeline] nodesByLabel 00:00:10.115 Found a total of 2 nodes with the 'sorcerer' label 00:00:10.126 [Pipeline] httpRequest 00:00:10.131 HttpMethod: GET 00:00:10.132 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.135 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.143 Response Code: HTTP/1.1 200 OK 00:00:10.144 Success: Status code 200 is in the accepted range: 200,404 00:00:10.145 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:20.975 [Pipeline] sh 00:00:21.260 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:21.279 [Pipeline] httpRequest 00:00:21.285 HttpMethod: GET 00:00:21.286 URL: http://10.211.164.101/packages/spdk_3b7525570fecfbbc8d64b275d4c0c9dbe5b69225.tar.gz 00:00:21.287 Sending request to url: http://10.211.164.101/packages/spdk_3b7525570fecfbbc8d64b275d4c0c9dbe5b69225.tar.gz 00:00:21.295 Response Code: HTTP/1.1 200 OK 00:00:21.296 Success: Status code 200 is in the accepted range: 200,404 00:00:21.296 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3b7525570fecfbbc8d64b275d4c0c9dbe5b69225.tar.gz 00:01:13.700 [Pipeline] sh 00:01:14.048 + tar --no-same-owner -xf spdk_3b7525570fecfbbc8d64b275d4c0c9dbe5b69225.tar.gz 00:01:17.357 [Pipeline] sh 00:01:17.642 + git -C spdk log --oneline -n5 00:01:17.642 3b7525570 nvme: Get PI format for Extended LBA format 00:01:17.642 1e8a0c991 nvme: Get NVM Identify Namespace Data for Extended LBA Format 00:01:17.642 493b11851 nvme: Use Host Behavior Support Feature to enable LBA Format Extension 00:01:17.642 e2612f201 nvme: Factor out getting ZNS Identify Namespace Data 00:01:17.642 93e13a7a6 nvme_spec: Add IOCS Identify Namespace Data for NVM command set 00:01:17.654 [Pipeline] } 00:01:17.671 [Pipeline] // stage 00:01:17.679 [Pipeline] stage 00:01:17.682 [Pipeline] { (Prepare) 00:01:17.695 [Pipeline] writeFile 00:01:17.707 [Pipeline] sh 00:01:17.987 + logger -p user.info -t JENKINS-CI 00:01:17.999 [Pipeline] sh 00:01:18.282 + logger -p user.info -t JENKINS-CI 00:01:18.295 [Pipeline] sh 00:01:18.578 + cat autorun-spdk.conf 00:01:18.578 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.578 SPDK_TEST_NVMF=1 00:01:18.578 SPDK_TEST_NVME_CLI=1 00:01:18.578 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.578 SPDK_TEST_NVMF_NICS=e810 00:01:18.578 SPDK_TEST_VFIOUSER=1 00:01:18.578 SPDK_RUN_UBSAN=1 00:01:18.578 NET_TYPE=phy 00:01:18.585 RUN_NIGHTLY=0 00:01:18.591 [Pipeline] readFile 00:01:18.615 [Pipeline] withEnv 00:01:18.617 [Pipeline] { 00:01:18.632 [Pipeline] sh 00:01:18.917 + set -ex 00:01:18.917 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:18.917 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.917 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.917 ++ SPDK_TEST_NVMF=1 00:01:18.917 ++ SPDK_TEST_NVME_CLI=1 00:01:18.917 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.917 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.917 ++ SPDK_TEST_VFIOUSER=1 00:01:18.917 ++ SPDK_RUN_UBSAN=1 00:01:18.917 ++ NET_TYPE=phy 00:01:18.917 ++ RUN_NIGHTLY=0 00:01:18.917 + case $SPDK_TEST_NVMF_NICS in 00:01:18.917 + DRIVERS=ice 00:01:18.917 + [[ tcp == \r\d\m\a ]] 00:01:18.917 + [[ -n ice ]] 00:01:18.917 + sudo rmmod 
mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:18.917 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:18.917 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:18.917 rmmod: ERROR: Module irdma is not currently loaded 00:01:18.917 rmmod: ERROR: Module i40iw is not currently loaded 00:01:18.917 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:18.917 + true 00:01:18.917 + for D in $DRIVERS 00:01:18.917 + sudo modprobe ice 00:01:18.917 + exit 0 00:01:18.927 [Pipeline] } 00:01:18.944 [Pipeline] // withEnv 00:01:18.949 [Pipeline] } 00:01:18.965 [Pipeline] // stage 00:01:18.974 [Pipeline] catchError 00:01:18.976 [Pipeline] { 00:01:18.990 [Pipeline] timeout 00:01:18.991 Timeout set to expire in 50 min 00:01:18.992 [Pipeline] { 00:01:19.008 [Pipeline] stage 00:01:19.010 [Pipeline] { (Tests) 00:01:19.025 [Pipeline] sh 00:01:19.311 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.311 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.311 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.311 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:19.311 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.311 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:19.311 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:19.311 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:19.311 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:19.311 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:19.311 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:19.311 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.311 + source /etc/os-release 00:01:19.311 ++ NAME='Fedora Linux' 00:01:19.311 ++ VERSION='38 (Cloud Edition)' 00:01:19.311 ++ ID=fedora 00:01:19.311 ++ VERSION_ID=38 00:01:19.311 ++ VERSION_CODENAME= 00:01:19.311 ++ PLATFORM_ID=platform:f38 00:01:19.311 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.311 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.311 ++ LOGO=fedora-logo-icon 00:01:19.311 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.311 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.311 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.311 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.311 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.311 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.311 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.311 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.311 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.311 ++ SUPPORT_END=2024-05-14 00:01:19.311 ++ VARIANT='Cloud Edition' 00:01:19.311 ++ VARIANT_ID=cloud 00:01:19.311 + uname -a 00:01:19.311 Linux spdk-CYP-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:19.311 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:22.609 Hugepages 00:01:22.609 node hugesize free / total 00:01:22.609 node0 1048576kB 0 / 0 00:01:22.609 node0 2048kB 0 / 0 00:01:22.609 node1 1048576kB 0 / 0 00:01:22.609 node1 2048kB 0 / 0 00:01:22.609 00:01:22.609 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.609 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 
0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:22.609 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:22.609 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:22.609 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:22.609 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:22.609 + rm -f /tmp/spdk-ld-path 00:01:22.609 + source autorun-spdk.conf 00:01:22.609 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.609 ++ SPDK_TEST_NVMF=1 00:01:22.609 ++ SPDK_TEST_NVME_CLI=1 00:01:22.609 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.609 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.609 ++ SPDK_TEST_VFIOUSER=1 00:01:22.609 ++ SPDK_RUN_UBSAN=1 00:01:22.609 ++ NET_TYPE=phy 00:01:22.609 ++ RUN_NIGHTLY=0 00:01:22.609 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.609 + [[ -n '' ]] 00:01:22.609 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.609 + for M in /var/spdk/build-*-manifest.txt 00:01:22.609 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.609 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.609 + for M in /var/spdk/build-*-manifest.txt 00:01:22.609 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.609 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.870 ++ uname 00:01:22.870 + [[ Linux == \L\i\n\u\x ]] 00:01:22.870 + sudo dmesg -T 00:01:22.870 + sudo dmesg --clear 00:01:22.870 + dmesg_pid=1217464 00:01:22.870 + [[ Fedora Linux == FreeBSD ]] 00:01:22.870 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.870 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.870 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.870 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.871 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.871 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.871 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.871 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.871 + sudo dmesg -Tw 00:01:22.871 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.871 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.871 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.871 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.871 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.871 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.871 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.871 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.871 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.871 Test configuration: 00:01:22.871 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.871 SPDK_TEST_NVMF=1 00:01:22.871 SPDK_TEST_NVME_CLI=1 00:01:22.871 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.871 SPDK_TEST_NVMF_NICS=e810 00:01:22.871 SPDK_TEST_VFIOUSER=1 00:01:22.871 SPDK_RUN_UBSAN=1 00:01:22.871 NET_TYPE=phy 00:01:22.871 RUN_NIGHTLY=0 11:08:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:22.871 11:08:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.871 11:08:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.871 11:08:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.871 11:08:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.871 11:08:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.871 11:08:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.871 11:08:19 -- paths/export.sh@5 -- $ export PATH 00:01:22.871 11:08:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.871 11:08:20 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:22.871 11:08:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:22.871 11:08:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718010500.XXXXXX 00:01:22.871 11:08:20 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718010500.B56TAS 00:01:22.871 11:08:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:22.871 11:08:20 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:22.871 11:08:20 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:22.871 11:08:20 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.871 11:08:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.871 11:08:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:22.871 11:08:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:22.871 11:08:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.871 11:08:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:22.871 11:08:20 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:22.871 11:08:20 -- pm/common@17 -- $ local monitor 00:01:22.871 11:08:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.871 11:08:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.871 11:08:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.871 11:08:20 -- pm/common@21 -- $ date +%s 00:01:22.871 11:08:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.871 11:08:20 -- pm/common@25 -- $ sleep 1 00:01:22.871 11:08:20 -- pm/common@21 -- $ date +%s 00:01:22.871 11:08:20 -- pm/common@21 -- $ date +%s 00:01:22.871 11:08:20 -- pm/common@21 -- $ date +%s 00:01:22.871 11:08:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010500 00:01:22.871 11:08:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010500 00:01:22.871 11:08:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010500 00:01:22.871 11:08:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010500 00:01:22.871 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010500_collect-vmstat.pm.log 00:01:23.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010500_collect-cpu-load.pm.log 00:01:23.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010500_collect-cpu-temp.pm.log 00:01:23.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010500_collect-bmc-pm.bmc.pm.log 00:01:24.074 11:08:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:24.074 11:08:21 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.074 11:08:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.074 11:08:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.074 11:08:21 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.074 Mon Jun 10 09:08:21 AM UTC 2024 00:01:24.074 11:08:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.074 v24.09-pre-58-g3b7525570 00:01:24.074 11:08:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.074 11:08:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.074 11:08:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.074 11:08:21 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:24.074 11:08:21 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:24.074 11:08:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.074 ************************************ 00:01:24.074 START TEST ubsan 00:01:24.074 ************************************ 00:01:24.074 11:08:21 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:24.074 using ubsan 00:01:24.074 00:01:24.074 real 0m0.001s 00:01:24.074 user 0m0.000s 00:01:24.074 sys 0m0.000s 00:01:24.074 11:08:21 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:24.074 11:08:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.074 ************************************ 00:01:24.074 END TEST ubsan 00:01:24.074 ************************************ 00:01:24.074 11:08:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.074 11:08:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.074 11:08:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.074 11:08:21 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:24.335 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:24.335 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.596 Using 'verbs' RDMA provider 00:01:40.441 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:52.673 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:52.673 Creating mk/config.mk...done. 00:01:52.673 Creating mk/cc.flags.mk...done. 00:01:52.673 Type 'make' to build. 00:01:52.673 11:08:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:52.673 11:08:49 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:52.673 11:08:49 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:52.673 11:08:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.673 ************************************ 00:01:52.673 START TEST make 00:01:52.673 ************************************ 00:01:52.673 11:08:49 make -- common/autotest_common.sh@1124 -- $ make -j128 00:01:52.673 make[1]: Nothing to be done for 'all'. 
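The autobuild steps above configure and build SPDK with UBSan enabled. As a rough local equivalent, the configure flags are copied from the trace, while the checkout path and job count are assumptions (the CI host runs make -j128):

# Assumed local reproduction of the configure/make sequence traced above.
cd /path/to/spdk   # the CI checkout lives at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"   # the run above uses -j128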
00:01:54.048 The Meson build system 00:01:54.048 Version: 1.3.1 00:01:54.048 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:54.048 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:54.048 Build type: native build 00:01:54.048 Project name: libvfio-user 00:01:54.048 Project version: 0.0.1 00:01:54.048 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.048 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.048 Host machine cpu family: x86_64 00:01:54.048 Host machine cpu: x86_64 00:01:54.048 Run-time dependency threads found: YES 00:01:54.048 Library dl found: YES 00:01:54.048 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.048 Run-time dependency json-c found: YES 0.17 00:01:54.048 Run-time dependency cmocka found: YES 1.1.7 00:01:54.048 Program pytest-3 found: NO 00:01:54.048 Program flake8 found: NO 00:01:54.048 Program misspell-fixer found: NO 00:01:54.048 Program restructuredtext-lint found: NO 00:01:54.048 Program valgrind found: YES (/usr/bin/valgrind) 00:01:54.048 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.048 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.048 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.048 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:54.048 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:54.048 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:54.048 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:54.048 Build targets in project: 8 00:01:54.048 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:54.048 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:54.048 00:01:54.048 libvfio-user 0.0.1 00:01:54.048 00:01:54.048 User defined options 00:01:54.048 buildtype : debug 00:01:54.048 default_library: shared 00:01:54.048 libdir : /usr/local/lib 00:01:54.048 00:01:54.048 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.355 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:54.355 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:54.355 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:54.355 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:54.355 [4/37] Compiling C object samples/null.p/null.c.o 00:01:54.355 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:54.355 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:54.355 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:54.355 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:54.355 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:54.355 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:54.355 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:54.355 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:54.355 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:54.355 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:54.355 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:54.355 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:54.355 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:54.355 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:54.355 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:54.355 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:54.355 [21/37] Compiling C object samples/server.p/server.c.o 00:01:54.355 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:54.355 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:54.355 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:54.355 [25/37] Compiling C object samples/client.p/client.c.o 00:01:54.355 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:54.355 [27/37] Linking target samples/client 00:01:54.633 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:54.633 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:54.633 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:54.633 [31/37] Linking target test/unit_tests 00:01:54.633 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:54.633 [33/37] Linking target samples/null 00:01:54.633 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:54.633 [35/37] Linking target samples/gpio-pci-idio-16 00:01:54.633 [36/37] Linking target samples/server 00:01:54.633 [37/37] Linking target samples/lspci 00:01:54.633 INFO: autodetecting backend as ninja 00:01:54.633 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
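The libvfio-user sub-build recorded here is a meson configure into build-debug, a ninja build, and the DESTDIR-staged install shown on the next log lines. The exact meson invocation is not printed, so the following is only an assumed equivalent of the "User defined options" the summary reports (buildtype debug, shared default_library, libdir /usr/local/lib):

# Hypothetical equivalent of the libvfio-user build/install traced around this point.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
# The install step below mirrors the DESTDIR form that appears in the log itself.
DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"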
00:01:54.633 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:55.204 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:55.204 ninja: no work to do. 00:02:01.823 The Meson build system 00:02:01.823 Version: 1.3.1 00:02:01.823 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:01.823 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:01.823 Build type: native build 00:02:01.823 Program cat found: YES (/usr/bin/cat) 00:02:01.823 Project name: DPDK 00:02:01.823 Project version: 24.03.0 00:02:01.823 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:01.823 C linker for the host machine: cc ld.bfd 2.39-16 00:02:01.823 Host machine cpu family: x86_64 00:02:01.823 Host machine cpu: x86_64 00:02:01.823 Message: ## Building in Developer Mode ## 00:02:01.823 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:01.823 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:01.823 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:01.823 Program python3 found: YES (/usr/bin/python3) 00:02:01.823 Program cat found: YES (/usr/bin/cat) 00:02:01.823 Compiler for C supports arguments -march=native: YES 00:02:01.823 Checking for size of "void *" : 8 00:02:01.823 Checking for size of "void *" : 8 (cached) 00:02:01.823 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:01.824 Library m found: YES 00:02:01.824 Library numa found: YES 00:02:01.824 Has header "numaif.h" : YES 00:02:01.824 Library fdt found: NO 00:02:01.824 Library execinfo found: NO 00:02:01.824 Has header "execinfo.h" : YES 00:02:01.824 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:01.824 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:01.824 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:01.824 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:01.824 Run-time dependency openssl found: YES 3.0.9 00:02:01.824 Run-time dependency libpcap found: YES 1.10.4 00:02:01.824 Has header "pcap.h" with dependency libpcap: YES 00:02:01.824 Compiler for C supports arguments -Wcast-qual: YES 00:02:01.824 Compiler for C supports arguments -Wdeprecated: YES 00:02:01.824 Compiler for C supports arguments -Wformat: YES 00:02:01.824 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:01.824 Compiler for C supports arguments -Wformat-security: NO 00:02:01.824 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.824 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:01.824 Compiler for C supports arguments -Wnested-externs: YES 00:02:01.824 Compiler for C supports arguments -Wold-style-definition: YES 00:02:01.824 Compiler for C supports arguments -Wpointer-arith: YES 00:02:01.824 Compiler for C supports arguments -Wsign-compare: YES 00:02:01.824 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:01.824 Compiler for C supports arguments -Wundef: YES 00:02:01.824 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.824 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:01.824 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:01.824 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.824 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:01.824 Program objdump found: YES (/usr/bin/objdump) 00:02:01.824 Compiler for C supports arguments -mavx512f: YES 00:02:01.824 Checking if "AVX512 checking" compiles: YES 00:02:01.824 Fetching value of define "__SSE4_2__" : 1 00:02:01.824 Fetching value of define "__AES__" : 1 00:02:01.824 Fetching value of define "__AVX__" : 1 00:02:01.824 Fetching value of define "__AVX2__" : 1 00:02:01.824 Fetching value of define "__AVX512BW__" : 1 00:02:01.824 Fetching value of define "__AVX512CD__" : 1 00:02:01.824 Fetching value of define "__AVX512DQ__" : 1 00:02:01.824 Fetching value of define "__AVX512F__" : 1 00:02:01.824 Fetching value of define "__AVX512VL__" : 1 00:02:01.824 Fetching value of define "__PCLMUL__" : 1 00:02:01.824 Fetching value of define "__RDRND__" : 1 00:02:01.824 Fetching value of define "__RDSEED__" : 1 00:02:01.824 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:01.824 Fetching value of define "__znver1__" : (undefined) 00:02:01.824 Fetching value of define "__znver2__" : (undefined) 00:02:01.824 Fetching value of define "__znver3__" : (undefined) 00:02:01.824 Fetching value of define "__znver4__" : (undefined) 00:02:01.824 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:01.824 Message: lib/log: Defining dependency "log" 00:02:01.824 Message: lib/kvargs: Defining dependency "kvargs" 00:02:01.824 Message: lib/telemetry: Defining dependency "telemetry" 00:02:01.824 Checking for function "getentropy" : NO 00:02:01.824 Message: lib/eal: Defining dependency "eal" 00:02:01.824 Message: lib/ring: Defining dependency "ring" 00:02:01.824 Message: lib/rcu: Defining dependency "rcu" 00:02:01.824 Message: lib/mempool: Defining dependency "mempool" 00:02:01.824 Message: lib/mbuf: Defining dependency "mbuf" 00:02:01.824 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:01.824 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.824 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.824 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:01.824 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:01.824 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:01.824 Compiler for C supports arguments -mpclmul: YES 00:02:01.824 Compiler for C supports arguments -maes: YES 00:02:01.824 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.824 Compiler for C supports arguments -mavx512bw: YES 00:02:01.824 Compiler for C supports arguments -mavx512dq: YES 00:02:01.824 Compiler for C supports arguments -mavx512vl: YES 00:02:01.824 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:01.824 Compiler for C supports arguments -mavx2: YES 00:02:01.824 Compiler for C supports arguments -mavx: YES 00:02:01.824 Message: lib/net: Defining dependency "net" 00:02:01.824 Message: lib/meter: Defining dependency "meter" 00:02:01.824 Message: lib/ethdev: Defining dependency "ethdev" 00:02:01.824 Message: lib/pci: Defining dependency "pci" 00:02:01.824 Message: lib/cmdline: Defining dependency "cmdline" 00:02:01.824 Message: lib/hash: Defining dependency "hash" 00:02:01.824 Message: lib/timer: Defining dependency "timer" 00:02:01.824 Message: lib/compressdev: Defining dependency "compressdev" 00:02:01.824 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:01.824 Message: lib/dmadev: Defining dependency "dmadev" 00:02:01.824 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:01.824 Message: lib/power: Defining dependency "power" 00:02:01.824 Message: lib/reorder: Defining dependency "reorder" 00:02:01.824 Message: lib/security: Defining dependency "security" 00:02:01.824 Has header "linux/userfaultfd.h" : YES 00:02:01.824 Has header "linux/vduse.h" : YES 00:02:01.824 Message: lib/vhost: Defining dependency "vhost" 00:02:01.824 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:01.824 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:01.824 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:01.824 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:01.824 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:01.824 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:01.824 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:01.824 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:01.824 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:01.824 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:01.824 Program doxygen found: YES (/usr/bin/doxygen) 00:02:01.824 Configuring doxy-api-html.conf using configuration 00:02:01.824 Configuring doxy-api-man.conf using configuration 00:02:01.824 Program mandb found: YES (/usr/bin/mandb) 00:02:01.824 Program sphinx-build found: NO 00:02:01.824 Configuring rte_build_config.h using configuration 00:02:01.824 Message: 00:02:01.824 ================= 00:02:01.824 Applications Enabled 00:02:01.824 ================= 00:02:01.824 00:02:01.824 apps: 00:02:01.824 00:02:01.824 00:02:01.824 Message: 00:02:01.824 ================= 00:02:01.824 Libraries Enabled 00:02:01.824 ================= 00:02:01.824 00:02:01.824 libs: 00:02:01.824 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:01.824 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:01.824 cryptodev, dmadev, power, reorder, security, vhost, 00:02:01.824 00:02:01.824 Message: 00:02:01.824 =============== 00:02:01.824 Drivers Enabled 00:02:01.824 =============== 00:02:01.824 00:02:01.824 common: 00:02:01.824 00:02:01.824 bus: 00:02:01.824 pci, vdev, 00:02:01.824 mempool: 00:02:01.824 ring, 00:02:01.824 dma: 00:02:01.824 00:02:01.824 net: 00:02:01.824 00:02:01.824 crypto: 00:02:01.824 00:02:01.824 compress: 00:02:01.824 00:02:01.824 vdpa: 00:02:01.824 00:02:01.824 00:02:01.824 Message: 00:02:01.824 ================= 00:02:01.824 Content Skipped 00:02:01.824 ================= 00:02:01.824 00:02:01.824 apps: 00:02:01.824 dumpcap: explicitly disabled via build config 00:02:01.824 graph: explicitly disabled via build config 00:02:01.824 pdump: explicitly disabled via build config 00:02:01.824 proc-info: explicitly disabled via build config 00:02:01.824 test-acl: explicitly disabled via build config 00:02:01.824 test-bbdev: explicitly disabled via build config 00:02:01.824 test-cmdline: explicitly disabled via build config 00:02:01.824 test-compress-perf: explicitly disabled via build config 00:02:01.824 test-crypto-perf: explicitly disabled via build config 00:02:01.824 test-dma-perf: explicitly disabled via build config 00:02:01.824 test-eventdev: explicitly disabled via build config 00:02:01.824 test-fib: explicitly disabled via build config 00:02:01.824 test-flow-perf: explicitly disabled via build config 00:02:01.824 test-gpudev: explicitly disabled via build config 00:02:01.824 
test-mldev: explicitly disabled via build config 00:02:01.824 test-pipeline: explicitly disabled via build config 00:02:01.824 test-pmd: explicitly disabled via build config 00:02:01.824 test-regex: explicitly disabled via build config 00:02:01.824 test-sad: explicitly disabled via build config 00:02:01.824 test-security-perf: explicitly disabled via build config 00:02:01.824 00:02:01.824 libs: 00:02:01.824 argparse: explicitly disabled via build config 00:02:01.824 metrics: explicitly disabled via build config 00:02:01.824 acl: explicitly disabled via build config 00:02:01.824 bbdev: explicitly disabled via build config 00:02:01.824 bitratestats: explicitly disabled via build config 00:02:01.824 bpf: explicitly disabled via build config 00:02:01.824 cfgfile: explicitly disabled via build config 00:02:01.824 distributor: explicitly disabled via build config 00:02:01.824 efd: explicitly disabled via build config 00:02:01.824 eventdev: explicitly disabled via build config 00:02:01.824 dispatcher: explicitly disabled via build config 00:02:01.824 gpudev: explicitly disabled via build config 00:02:01.824 gro: explicitly disabled via build config 00:02:01.824 gso: explicitly disabled via build config 00:02:01.824 ip_frag: explicitly disabled via build config 00:02:01.824 jobstats: explicitly disabled via build config 00:02:01.824 latencystats: explicitly disabled via build config 00:02:01.824 lpm: explicitly disabled via build config 00:02:01.825 member: explicitly disabled via build config 00:02:01.825 pcapng: explicitly disabled via build config 00:02:01.825 rawdev: explicitly disabled via build config 00:02:01.825 regexdev: explicitly disabled via build config 00:02:01.825 mldev: explicitly disabled via build config 00:02:01.825 rib: explicitly disabled via build config 00:02:01.825 sched: explicitly disabled via build config 00:02:01.825 stack: explicitly disabled via build config 00:02:01.825 ipsec: explicitly disabled via build config 00:02:01.825 pdcp: explicitly disabled via build config 00:02:01.825 fib: explicitly disabled via build config 00:02:01.825 port: explicitly disabled via build config 00:02:01.825 pdump: explicitly disabled via build config 00:02:01.825 table: explicitly disabled via build config 00:02:01.825 pipeline: explicitly disabled via build config 00:02:01.825 graph: explicitly disabled via build config 00:02:01.825 node: explicitly disabled via build config 00:02:01.825 00:02:01.825 drivers: 00:02:01.825 common/cpt: not in enabled drivers build config 00:02:01.825 common/dpaax: not in enabled drivers build config 00:02:01.825 common/iavf: not in enabled drivers build config 00:02:01.825 common/idpf: not in enabled drivers build config 00:02:01.825 common/ionic: not in enabled drivers build config 00:02:01.825 common/mvep: not in enabled drivers build config 00:02:01.825 common/octeontx: not in enabled drivers build config 00:02:01.825 bus/auxiliary: not in enabled drivers build config 00:02:01.825 bus/cdx: not in enabled drivers build config 00:02:01.825 bus/dpaa: not in enabled drivers build config 00:02:01.825 bus/fslmc: not in enabled drivers build config 00:02:01.825 bus/ifpga: not in enabled drivers build config 00:02:01.825 bus/platform: not in enabled drivers build config 00:02:01.825 bus/uacce: not in enabled drivers build config 00:02:01.825 bus/vmbus: not in enabled drivers build config 00:02:01.825 common/cnxk: not in enabled drivers build config 00:02:01.825 common/mlx5: not in enabled drivers build config 00:02:01.825 common/nfp: not in enabled drivers 
build config 00:02:01.825 common/nitrox: not in enabled drivers build config 00:02:01.825 common/qat: not in enabled drivers build config 00:02:01.825 common/sfc_efx: not in enabled drivers build config 00:02:01.825 mempool/bucket: not in enabled drivers build config 00:02:01.825 mempool/cnxk: not in enabled drivers build config 00:02:01.825 mempool/dpaa: not in enabled drivers build config 00:02:01.825 mempool/dpaa2: not in enabled drivers build config 00:02:01.825 mempool/octeontx: not in enabled drivers build config 00:02:01.825 mempool/stack: not in enabled drivers build config 00:02:01.825 dma/cnxk: not in enabled drivers build config 00:02:01.825 dma/dpaa: not in enabled drivers build config 00:02:01.825 dma/dpaa2: not in enabled drivers build config 00:02:01.825 dma/hisilicon: not in enabled drivers build config 00:02:01.825 dma/idxd: not in enabled drivers build config 00:02:01.825 dma/ioat: not in enabled drivers build config 00:02:01.825 dma/skeleton: not in enabled drivers build config 00:02:01.825 net/af_packet: not in enabled drivers build config 00:02:01.825 net/af_xdp: not in enabled drivers build config 00:02:01.825 net/ark: not in enabled drivers build config 00:02:01.825 net/atlantic: not in enabled drivers build config 00:02:01.825 net/avp: not in enabled drivers build config 00:02:01.825 net/axgbe: not in enabled drivers build config 00:02:01.825 net/bnx2x: not in enabled drivers build config 00:02:01.825 net/bnxt: not in enabled drivers build config 00:02:01.825 net/bonding: not in enabled drivers build config 00:02:01.825 net/cnxk: not in enabled drivers build config 00:02:01.825 net/cpfl: not in enabled drivers build config 00:02:01.825 net/cxgbe: not in enabled drivers build config 00:02:01.825 net/dpaa: not in enabled drivers build config 00:02:01.825 net/dpaa2: not in enabled drivers build config 00:02:01.825 net/e1000: not in enabled drivers build config 00:02:01.825 net/ena: not in enabled drivers build config 00:02:01.825 net/enetc: not in enabled drivers build config 00:02:01.825 net/enetfec: not in enabled drivers build config 00:02:01.825 net/enic: not in enabled drivers build config 00:02:01.825 net/failsafe: not in enabled drivers build config 00:02:01.825 net/fm10k: not in enabled drivers build config 00:02:01.825 net/gve: not in enabled drivers build config 00:02:01.825 net/hinic: not in enabled drivers build config 00:02:01.825 net/hns3: not in enabled drivers build config 00:02:01.825 net/i40e: not in enabled drivers build config 00:02:01.825 net/iavf: not in enabled drivers build config 00:02:01.825 net/ice: not in enabled drivers build config 00:02:01.825 net/idpf: not in enabled drivers build config 00:02:01.825 net/igc: not in enabled drivers build config 00:02:01.825 net/ionic: not in enabled drivers build config 00:02:01.825 net/ipn3ke: not in enabled drivers build config 00:02:01.825 net/ixgbe: not in enabled drivers build config 00:02:01.825 net/mana: not in enabled drivers build config 00:02:01.825 net/memif: not in enabled drivers build config 00:02:01.825 net/mlx4: not in enabled drivers build config 00:02:01.825 net/mlx5: not in enabled drivers build config 00:02:01.825 net/mvneta: not in enabled drivers build config 00:02:01.825 net/mvpp2: not in enabled drivers build config 00:02:01.825 net/netvsc: not in enabled drivers build config 00:02:01.825 net/nfb: not in enabled drivers build config 00:02:01.825 net/nfp: not in enabled drivers build config 00:02:01.825 net/ngbe: not in enabled drivers build config 00:02:01.825 net/null: not in 
enabled drivers build config 00:02:01.825 net/octeontx: not in enabled drivers build config 00:02:01.825 net/octeon_ep: not in enabled drivers build config 00:02:01.825 net/pcap: not in enabled drivers build config 00:02:01.825 net/pfe: not in enabled drivers build config 00:02:01.825 net/qede: not in enabled drivers build config 00:02:01.825 net/ring: not in enabled drivers build config 00:02:01.825 net/sfc: not in enabled drivers build config 00:02:01.825 net/softnic: not in enabled drivers build config 00:02:01.825 net/tap: not in enabled drivers build config 00:02:01.825 net/thunderx: not in enabled drivers build config 00:02:01.825 net/txgbe: not in enabled drivers build config 00:02:01.825 net/vdev_netvsc: not in enabled drivers build config 00:02:01.825 net/vhost: not in enabled drivers build config 00:02:01.825 net/virtio: not in enabled drivers build config 00:02:01.825 net/vmxnet3: not in enabled drivers build config 00:02:01.825 raw/*: missing internal dependency, "rawdev" 00:02:01.825 crypto/armv8: not in enabled drivers build config 00:02:01.825 crypto/bcmfs: not in enabled drivers build config 00:02:01.825 crypto/caam_jr: not in enabled drivers build config 00:02:01.825 crypto/ccp: not in enabled drivers build config 00:02:01.825 crypto/cnxk: not in enabled drivers build config 00:02:01.825 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.825 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.825 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.825 crypto/mlx5: not in enabled drivers build config 00:02:01.825 crypto/mvsam: not in enabled drivers build config 00:02:01.825 crypto/nitrox: not in enabled drivers build config 00:02:01.825 crypto/null: not in enabled drivers build config 00:02:01.825 crypto/octeontx: not in enabled drivers build config 00:02:01.825 crypto/openssl: not in enabled drivers build config 00:02:01.825 crypto/scheduler: not in enabled drivers build config 00:02:01.825 crypto/uadk: not in enabled drivers build config 00:02:01.825 crypto/virtio: not in enabled drivers build config 00:02:01.825 compress/isal: not in enabled drivers build config 00:02:01.825 compress/mlx5: not in enabled drivers build config 00:02:01.825 compress/nitrox: not in enabled drivers build config 00:02:01.825 compress/octeontx: not in enabled drivers build config 00:02:01.825 compress/zlib: not in enabled drivers build config 00:02:01.825 regex/*: missing internal dependency, "regexdev" 00:02:01.825 ml/*: missing internal dependency, "mldev" 00:02:01.825 vdpa/ifc: not in enabled drivers build config 00:02:01.825 vdpa/mlx5: not in enabled drivers build config 00:02:01.825 vdpa/nfp: not in enabled drivers build config 00:02:01.825 vdpa/sfc: not in enabled drivers build config 00:02:01.825 event/*: missing internal dependency, "eventdev" 00:02:01.825 baseband/*: missing internal dependency, "bbdev" 00:02:01.825 gpu/*: missing internal dependency, "gpudev" 00:02:01.825 00:02:01.825 00:02:01.825 Build targets in project: 84 00:02:01.825 00:02:01.825 DPDK 24.03.0 00:02:01.825 00:02:01.825 User defined options 00:02:01.825 buildtype : debug 00:02:01.825 default_library : shared 00:02:01.825 libdir : lib 00:02:01.825 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:01.825 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:01.825 c_link_args : 00:02:01.825 cpu_instruction_set: native 00:02:01.825 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:01.825 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:01.825 enable_docs : false 00:02:01.825 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:01.825 enable_kmods : false 00:02:01.825 tests : false 00:02:01.825 00:02:01.825 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.825 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:01.825 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.825 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.825 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.825 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.825 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.825 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.825 [7/267] Linking static target lib/librte_log.a 00:02:01.825 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.825 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.825 [10/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.825 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.826 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.826 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.826 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.826 [15/267] Linking static target lib/librte_kvargs.a 00:02:01.826 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.826 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.826 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.826 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.826 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.826 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:01.826 [22/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.826 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.826 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.826 [25/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.826 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.826 [27/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.826 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.826 [29/267] Linking static target lib/librte_pci.a 00:02:01.826 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.826 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.084 [32/267] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.084 [33/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:02.084 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:02.084 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:02.084 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.084 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.084 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:02.085 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.085 [40/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:02.085 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.085 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.085 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.085 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.085 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.085 [46/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.085 [47/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.085 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.085 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.085 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.085 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.085 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.085 [53/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.085 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.085 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.085 [56/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.085 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.085 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.085 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.085 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.085 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.085 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.085 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.085 [64/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:02.085 [65/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.085 [66/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.085 [67/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:02.085 [68/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:02.344 [69/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.344 [70/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:02.344 [71/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.344 [72/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.344 [73/267] Linking static target lib/librte_telemetry.a 00:02:02.344 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.344 [75/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:02.344 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.344 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.344 [78/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.344 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.344 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.344 [81/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:02.344 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.344 [83/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.344 [84/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:02.344 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.344 [86/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:02.344 [87/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:02.344 [88/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.344 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.344 [90/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:02.344 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:02.344 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.344 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.344 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.344 [95/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.344 [96/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.344 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:02.344 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.344 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.344 [100/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:02.344 [101/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.344 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.344 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:02.344 [104/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.344 [105/267] Linking static target lib/librte_meter.a 00:02:02.344 [106/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.344 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.344 [108/267] Linking static target lib/librte_mempool.a 00:02:02.344 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.344 [110/267] Linking static target lib/librte_rcu.a 00:02:02.344 [111/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 
00:02:02.344 [112/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.344 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.344 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.344 [115/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.344 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.344 [117/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:02.344 [118/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.344 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.344 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.344 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.344 [122/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:02.344 [123/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:02.344 [124/267] Linking static target lib/librte_ring.a 00:02:02.344 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.344 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.344 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.344 [128/267] Linking static target lib/librte_timer.a 00:02:02.344 [129/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.344 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.344 [131/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:02.344 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.344 [133/267] Linking static target lib/librte_cmdline.a 00:02:02.344 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.344 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.344 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:02.344 [137/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.344 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.344 [139/267] Linking static target lib/librte_reorder.a 00:02:02.344 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:02.344 [141/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.344 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.344 [143/267] Linking static target lib/librte_compressdev.a 00:02:02.344 [144/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.344 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.344 [146/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.344 [147/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.344 [148/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.344 [149/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.344 [150/267] Linking static target lib/librte_dmadev.a 00:02:02.344 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.344 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.344 [153/267] 
Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.344 [154/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.344 [155/267] Linking static target lib/librte_net.a 00:02:02.344 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.344 [157/267] Linking static target lib/librte_power.a 00:02:02.345 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.345 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.345 [160/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.345 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.345 [162/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.345 [163/267] Linking target lib/librte_log.so.24.1 00:02:02.345 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:02.345 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.345 [166/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.605 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.605 [168/267] Linking static target lib/librte_eal.a 00:02:02.605 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.605 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:02.605 [171/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:02.605 [172/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.605 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.605 [174/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.605 [175/267] Linking static target lib/librte_hash.a 00:02:02.605 [176/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.605 [177/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.605 [178/267] Linking static target lib/librte_security.a 00:02:02.605 [179/267] Linking static target lib/librte_mbuf.a 00:02:02.605 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.605 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.605 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.605 [183/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:02.605 [184/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.605 [185/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.605 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.605 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.605 [188/267] Linking target lib/librte_kvargs.so.24.1 00:02:02.605 [189/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.605 [190/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:02.605 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.605 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.605 [193/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.605 [194/267] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.605 [195/267] Linking static target drivers/librte_bus_vdev.a 00:02:02.865 [196/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.865 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:02.865 [198/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.865 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.865 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.865 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [204/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.865 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.865 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.865 [207/267] Linking static target drivers/librte_bus_pci.a 00:02:02.865 [208/267] Linking static target drivers/librte_mempool_ring.a 00:02:02.865 [209/267] Linking target lib/librte_telemetry.so.24.1 00:02:02.865 [210/267] Linking static target lib/librte_cryptodev.a 00:02:02.865 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.125 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:03.125 [214/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.125 [215/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:03.125 [216/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.125 [217/267] Linking static target lib/librte_ethdev.a 00:02:03.125 [218/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.125 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.125 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.125 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.386 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.386 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.647 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.647 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.647 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.218 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.218 [228/267] Linking static target lib/librte_vhost.a 00:02:05.159 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.542 
[230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.127 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.508 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.508 [233/267] Linking target lib/librte_eal.so.24.1 00:02:14.508 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:14.508 [235/267] Linking target lib/librte_ring.so.24.1 00:02:14.508 [236/267] Linking target lib/librte_meter.so.24.1 00:02:14.508 [237/267] Linking target lib/librte_dmadev.so.24.1 00:02:14.508 [238/267] Linking target lib/librte_pci.so.24.1 00:02:14.508 [239/267] Linking target lib/librte_timer.so.24.1 00:02:14.508 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:14.768 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.768 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.768 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.768 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.768 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.768 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:14.768 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:14.768 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:15.027 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:15.028 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:15.028 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:15.028 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:15.286 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:15.286 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:15.286 [255/267] Linking target lib/librte_net.so.24.1 00:02:15.286 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:15.286 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:15.286 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:15.286 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:15.545 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:15.545 [261/267] Linking target lib/librte_hash.so.24.1 00:02:15.545 [262/267] Linking target lib/librte_security.so.24.1 00:02:15.545 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:15.545 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.545 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.545 [266/267] Linking target lib/librte_power.so.24.1 00:02:15.806 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:15.806 INFO: autodetecting backend as ninja 00:02:15.806 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:16.745 CC lib/ut_mock/mock.o 00:02:16.745 CC lib/log/log.o 00:02:16.745 CC lib/log/log_flags.o 00:02:16.745 CC lib/log/log_deprecated.o 00:02:16.745 CC lib/ut/ut.o 00:02:17.006 LIB libspdk_ut_mock.a 00:02:17.006 LIB libspdk_log.a 00:02:17.006 LIB libspdk_ut.a 00:02:17.006 SO 
libspdk_ut_mock.so.6.0 00:02:17.006 SO libspdk_log.so.7.0 00:02:17.006 SO libspdk_ut.so.2.0 00:02:17.006 SYMLINK libspdk_ut_mock.so 00:02:17.006 SYMLINK libspdk_ut.so 00:02:17.006 SYMLINK libspdk_log.so 00:02:17.575 CC lib/ioat/ioat.o 00:02:17.575 CC lib/dma/dma.o 00:02:17.575 CXX lib/trace_parser/trace.o 00:02:17.575 CC lib/util/base64.o 00:02:17.575 CC lib/util/bit_array.o 00:02:17.575 CC lib/util/cpuset.o 00:02:17.575 CC lib/util/crc16.o 00:02:17.575 CC lib/util/crc32.o 00:02:17.575 CC lib/util/crc32c.o 00:02:17.575 CC lib/util/crc32_ieee.o 00:02:17.575 CC lib/util/crc64.o 00:02:17.575 CC lib/util/dif.o 00:02:17.575 CC lib/util/fd.o 00:02:17.575 CC lib/util/hexlify.o 00:02:17.575 CC lib/util/file.o 00:02:17.575 CC lib/util/iov.o 00:02:17.575 CC lib/util/math.o 00:02:17.575 CC lib/util/pipe.o 00:02:17.575 CC lib/util/strerror_tls.o 00:02:17.575 CC lib/util/string.o 00:02:17.575 CC lib/util/uuid.o 00:02:17.576 CC lib/util/fd_group.o 00:02:17.576 CC lib/util/xor.o 00:02:17.576 CC lib/util/zipf.o 00:02:17.576 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.576 CC lib/vfio_user/host/vfio_user.o 00:02:17.576 LIB libspdk_ioat.a 00:02:17.576 LIB libspdk_dma.a 00:02:17.836 SO libspdk_dma.so.4.0 00:02:17.836 SO libspdk_ioat.so.7.0 00:02:17.836 SYMLINK libspdk_dma.so 00:02:17.836 SYMLINK libspdk_ioat.so 00:02:17.836 LIB libspdk_vfio_user.a 00:02:17.836 SO libspdk_vfio_user.so.5.0 00:02:17.836 LIB libspdk_util.a 00:02:18.096 SYMLINK libspdk_vfio_user.so 00:02:18.096 SO libspdk_util.so.9.0 00:02:18.096 SYMLINK libspdk_util.so 00:02:18.096 LIB libspdk_trace_parser.a 00:02:18.356 SO libspdk_trace_parser.so.5.0 00:02:18.356 SYMLINK libspdk_trace_parser.so 00:02:18.616 CC lib/env_dpdk/env.o 00:02:18.616 CC lib/env_dpdk/memory.o 00:02:18.616 CC lib/env_dpdk/init.o 00:02:18.616 CC lib/env_dpdk/pci.o 00:02:18.616 CC lib/env_dpdk/threads.o 00:02:18.616 CC lib/env_dpdk/pci_ioat.o 00:02:18.616 CC lib/env_dpdk/pci_virtio.o 00:02:18.616 CC lib/env_dpdk/pci_event.o 00:02:18.616 CC lib/env_dpdk/pci_vmd.o 00:02:18.616 CC lib/env_dpdk/pci_idxd.o 00:02:18.616 CC lib/idxd/idxd.o 00:02:18.616 CC lib/idxd/idxd_user.o 00:02:18.616 CC lib/idxd/idxd_kernel.o 00:02:18.616 CC lib/json/json_parse.o 00:02:18.616 CC lib/env_dpdk/sigbus_handler.o 00:02:18.616 CC lib/json/json_write.o 00:02:18.616 CC lib/json/json_util.o 00:02:18.616 CC lib/env_dpdk/pci_dpdk.o 00:02:18.616 CC lib/rdma/common.o 00:02:18.616 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.616 CC lib/rdma/rdma_verbs.o 00:02:18.616 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.616 CC lib/conf/conf.o 00:02:18.616 CC lib/vmd/vmd.o 00:02:18.616 CC lib/vmd/led.o 00:02:18.876 LIB libspdk_conf.a 00:02:18.876 LIB libspdk_rdma.a 00:02:18.876 SO libspdk_conf.so.6.0 00:02:18.876 LIB libspdk_json.a 00:02:18.876 SO libspdk_rdma.so.6.0 00:02:18.876 SO libspdk_json.so.6.0 00:02:18.876 SYMLINK libspdk_conf.so 00:02:18.876 SYMLINK libspdk_rdma.so 00:02:18.876 SYMLINK libspdk_json.so 00:02:18.876 LIB libspdk_vmd.a 00:02:18.876 LIB libspdk_idxd.a 00:02:18.876 SO libspdk_vmd.so.6.0 00:02:19.137 SO libspdk_idxd.so.12.0 00:02:19.137 SYMLINK libspdk_vmd.so 00:02:19.137 SYMLINK libspdk_idxd.so 00:02:19.137 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.137 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.137 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.137 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.398 LIB libspdk_jsonrpc.a 00:02:19.659 SO libspdk_jsonrpc.so.6.0 00:02:19.659 SYMLINK libspdk_jsonrpc.so 00:02:19.659 LIB libspdk_env_dpdk.a 00:02:19.659 SO libspdk_env_dpdk.so.14.1 00:02:19.919 SYMLINK 
libspdk_env_dpdk.so 00:02:19.919 CC lib/rpc/rpc.o 00:02:20.180 LIB libspdk_rpc.a 00:02:20.180 SO libspdk_rpc.so.6.0 00:02:20.180 SYMLINK libspdk_rpc.so 00:02:20.753 CC lib/notify/notify.o 00:02:20.753 CC lib/notify/notify_rpc.o 00:02:20.753 CC lib/keyring/keyring.o 00:02:20.753 CC lib/keyring/keyring_rpc.o 00:02:20.753 CC lib/trace/trace.o 00:02:20.753 CC lib/trace/trace_flags.o 00:02:20.753 CC lib/trace/trace_rpc.o 00:02:20.753 LIB libspdk_notify.a 00:02:20.753 LIB libspdk_keyring.a 00:02:20.753 SO libspdk_notify.so.6.0 00:02:20.753 SO libspdk_keyring.so.1.0 00:02:21.015 LIB libspdk_trace.a 00:02:21.015 SYMLINK libspdk_notify.so 00:02:21.015 SO libspdk_trace.so.10.0 00:02:21.015 SYMLINK libspdk_keyring.so 00:02:21.015 SYMLINK libspdk_trace.so 00:02:21.276 CC lib/thread/thread.o 00:02:21.276 CC lib/sock/sock.o 00:02:21.276 CC lib/thread/iobuf.o 00:02:21.276 CC lib/sock/sock_rpc.o 00:02:21.849 LIB libspdk_sock.a 00:02:21.849 SO libspdk_sock.so.9.0 00:02:21.849 SYMLINK libspdk_sock.so 00:02:22.110 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.110 CC lib/nvme/nvme_ctrlr.o 00:02:22.110 CC lib/nvme/nvme_fabric.o 00:02:22.110 CC lib/nvme/nvme_ns_cmd.o 00:02:22.110 CC lib/nvme/nvme_ns.o 00:02:22.110 CC lib/nvme/nvme_pcie_common.o 00:02:22.110 CC lib/nvme/nvme_pcie.o 00:02:22.110 CC lib/nvme/nvme_qpair.o 00:02:22.110 CC lib/nvme/nvme.o 00:02:22.110 CC lib/nvme/nvme_quirks.o 00:02:22.110 CC lib/nvme/nvme_discovery.o 00:02:22.110 CC lib/nvme/nvme_transport.o 00:02:22.110 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.110 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.110 CC lib/nvme/nvme_tcp.o 00:02:22.110 CC lib/nvme/nvme_opal.o 00:02:22.110 CC lib/nvme/nvme_io_msg.o 00:02:22.110 CC lib/nvme/nvme_poll_group.o 00:02:22.110 CC lib/nvme/nvme_zns.o 00:02:22.110 CC lib/nvme/nvme_stubs.o 00:02:22.110 CC lib/nvme/nvme_auth.o 00:02:22.110 CC lib/nvme/nvme_cuse.o 00:02:22.110 CC lib/nvme/nvme_vfio_user.o 00:02:22.110 CC lib/nvme/nvme_rdma.o 00:02:22.682 LIB libspdk_thread.a 00:02:22.682 SO libspdk_thread.so.10.0 00:02:22.682 SYMLINK libspdk_thread.so 00:02:22.943 CC lib/blob/blobstore.o 00:02:22.943 CC lib/blob/request.o 00:02:22.943 CC lib/virtio/virtio.o 00:02:22.943 CC lib/blob/zeroes.o 00:02:22.943 CC lib/virtio/virtio_vhost_user.o 00:02:22.943 CC lib/blob/blob_bs_dev.o 00:02:22.943 CC lib/virtio/virtio_vfio_user.o 00:02:22.943 CC lib/virtio/virtio_pci.o 00:02:22.943 CC lib/vfu_tgt/tgt_endpoint.o 00:02:23.203 CC lib/init/json_config.o 00:02:23.203 CC lib/vfu_tgt/tgt_rpc.o 00:02:23.203 CC lib/init/subsystem.o 00:02:23.203 CC lib/init/subsystem_rpc.o 00:02:23.203 CC lib/init/rpc.o 00:02:23.203 CC lib/accel/accel.o 00:02:23.203 CC lib/accel/accel_rpc.o 00:02:23.203 CC lib/accel/accel_sw.o 00:02:23.203 LIB libspdk_init.a 00:02:23.463 SO libspdk_init.so.5.0 00:02:23.463 LIB libspdk_virtio.a 00:02:23.463 LIB libspdk_vfu_tgt.a 00:02:23.463 SYMLINK libspdk_init.so 00:02:23.463 SO libspdk_virtio.so.7.0 00:02:23.463 SO libspdk_vfu_tgt.so.3.0 00:02:23.463 SYMLINK libspdk_vfu_tgt.so 00:02:23.463 SYMLINK libspdk_virtio.so 00:02:23.725 CC lib/event/app.o 00:02:23.725 CC lib/event/reactor.o 00:02:23.725 CC lib/event/log_rpc.o 00:02:23.725 CC lib/event/app_rpc.o 00:02:23.725 CC lib/event/scheduler_static.o 00:02:23.986 LIB libspdk_accel.a 00:02:23.986 LIB libspdk_nvme.a 00:02:23.986 SO libspdk_accel.so.15.0 00:02:23.986 SO libspdk_nvme.so.13.1 00:02:23.986 SYMLINK libspdk_accel.so 00:02:23.986 LIB libspdk_event.a 00:02:24.289 SO libspdk_event.so.13.1 00:02:24.289 SYMLINK libspdk_event.so 00:02:24.289 SYMLINK libspdk_nvme.so 
00:02:24.289 CC lib/bdev/bdev.o 00:02:24.289 CC lib/bdev/bdev_rpc.o 00:02:24.289 CC lib/bdev/bdev_zone.o 00:02:24.289 CC lib/bdev/part.o 00:02:24.289 CC lib/bdev/scsi_nvme.o 00:02:25.701 LIB libspdk_blob.a 00:02:25.701 SO libspdk_blob.so.11.0 00:02:25.701 SYMLINK libspdk_blob.so 00:02:25.961 CC lib/lvol/lvol.o 00:02:25.961 CC lib/blobfs/blobfs.o 00:02:25.961 CC lib/blobfs/tree.o 00:02:26.533 LIB libspdk_bdev.a 00:02:26.533 SO libspdk_bdev.so.15.0 00:02:26.533 SYMLINK libspdk_bdev.so 00:02:26.533 LIB libspdk_blobfs.a 00:02:26.795 SO libspdk_blobfs.so.10.0 00:02:26.795 LIB libspdk_lvol.a 00:02:26.795 SO libspdk_lvol.so.10.0 00:02:26.795 SYMLINK libspdk_blobfs.so 00:02:26.795 SYMLINK libspdk_lvol.so 00:02:27.057 CC lib/nbd/nbd.o 00:02:27.057 CC lib/nbd/nbd_rpc.o 00:02:27.057 CC lib/scsi/dev.o 00:02:27.057 CC lib/scsi/lun.o 00:02:27.057 CC lib/scsi/port.o 00:02:27.057 CC lib/scsi/scsi.o 00:02:27.057 CC lib/scsi/scsi_bdev.o 00:02:27.057 CC lib/scsi/scsi_pr.o 00:02:27.057 CC lib/ftl/ftl_core.o 00:02:27.057 CC lib/ublk/ublk_rpc.o 00:02:27.057 CC lib/scsi/scsi_rpc.o 00:02:27.057 CC lib/ftl/ftl_init.o 00:02:27.057 CC lib/nvmf/ctrlr.o 00:02:27.057 CC lib/ublk/ublk.o 00:02:27.057 CC lib/scsi/task.o 00:02:27.057 CC lib/ftl/ftl_layout.o 00:02:27.057 CC lib/nvmf/ctrlr_discovery.o 00:02:27.057 CC lib/ftl/ftl_debug.o 00:02:27.057 CC lib/nvmf/ctrlr_bdev.o 00:02:27.057 CC lib/ftl/ftl_io.o 00:02:27.057 CC lib/nvmf/subsystem.o 00:02:27.057 CC lib/ftl/ftl_sb.o 00:02:27.057 CC lib/nvmf/nvmf.o 00:02:27.057 CC lib/ftl/ftl_l2p.o 00:02:27.057 CC lib/nvmf/nvmf_rpc.o 00:02:27.057 CC lib/ftl/ftl_l2p_flat.o 00:02:27.057 CC lib/nvmf/transport.o 00:02:27.057 CC lib/ftl/ftl_nv_cache.o 00:02:27.057 CC lib/ftl/ftl_band.o 00:02:27.057 CC lib/nvmf/tcp.o 00:02:27.057 CC lib/nvmf/stubs.o 00:02:27.057 CC lib/ftl/ftl_band_ops.o 00:02:27.057 CC lib/ftl/ftl_writer.o 00:02:27.057 CC lib/nvmf/mdns_server.o 00:02:27.057 CC lib/ftl/ftl_rq.o 00:02:27.057 CC lib/nvmf/vfio_user.o 00:02:27.057 CC lib/ftl/ftl_reloc.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt.o 00:02:27.057 CC lib/nvmf/rdma.o 00:02:27.057 CC lib/ftl/ftl_l2p_cache.o 00:02:27.057 CC lib/ftl/ftl_p2l.o 00:02:27.057 CC lib/nvmf/auth.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.057 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.057 CC lib/ftl/utils/ftl_conf.o 00:02:27.057 CC lib/ftl/utils/ftl_mempool.o 00:02:27.057 CC lib/ftl/utils/ftl_md.o 00:02:27.057 CC lib/ftl/utils/ftl_property.o 00:02:27.057 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.057 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.057 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.057 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.057 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.057 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.057 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:27.057 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:27.057 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:27.057 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:27.057 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:27.057 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:27.057 CC lib/ftl/base/ftl_base_bdev.o 00:02:27.057 CC 
lib/ftl/base/ftl_base_dev.o 00:02:27.057 CC lib/ftl/ftl_trace.o 00:02:27.627 LIB libspdk_scsi.a 00:02:27.627 LIB libspdk_nbd.a 00:02:27.627 SO libspdk_scsi.so.9.0 00:02:27.627 SO libspdk_nbd.so.7.0 00:02:27.627 SYMLINK libspdk_scsi.so 00:02:27.627 SYMLINK libspdk_nbd.so 00:02:27.627 LIB libspdk_ublk.a 00:02:27.888 SO libspdk_ublk.so.3.0 00:02:27.888 SYMLINK libspdk_ublk.so 00:02:27.888 LIB libspdk_ftl.a 00:02:27.888 CC lib/vhost/vhost.o 00:02:27.888 CC lib/vhost/vhost_rpc.o 00:02:27.888 CC lib/vhost/vhost_scsi.o 00:02:27.888 CC lib/iscsi/conn.o 00:02:27.888 CC lib/vhost/rte_vhost_user.o 00:02:27.888 CC lib/vhost/vhost_blk.o 00:02:27.888 CC lib/iscsi/init_grp.o 00:02:27.888 CC lib/iscsi/iscsi.o 00:02:27.888 CC lib/iscsi/param.o 00:02:27.888 CC lib/iscsi/md5.o 00:02:27.888 CC lib/iscsi/portal_grp.o 00:02:27.888 CC lib/iscsi/tgt_node.o 00:02:27.888 CC lib/iscsi/iscsi_subsystem.o 00:02:27.888 CC lib/iscsi/iscsi_rpc.o 00:02:27.888 CC lib/iscsi/task.o 00:02:28.149 SO libspdk_ftl.so.9.0 00:02:28.409 SYMLINK libspdk_ftl.so 00:02:28.669 LIB libspdk_nvmf.a 00:02:28.930 SO libspdk_nvmf.so.18.1 00:02:28.930 LIB libspdk_vhost.a 00:02:28.930 SO libspdk_vhost.so.8.0 00:02:28.930 SYMLINK libspdk_nvmf.so 00:02:28.930 SYMLINK libspdk_vhost.so 00:02:29.191 LIB libspdk_iscsi.a 00:02:29.191 SO libspdk_iscsi.so.8.0 00:02:29.451 SYMLINK libspdk_iscsi.so 00:02:30.024 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.024 CC module/vfu_device/vfu_virtio.o 00:02:30.024 CC module/vfu_device/vfu_virtio_blk.o 00:02:30.024 CC module/vfu_device/vfu_virtio_scsi.o 00:02:30.024 CC module/vfu_device/vfu_virtio_rpc.o 00:02:30.024 CC module/accel/error/accel_error.o 00:02:30.024 CC module/accel/error/accel_error_rpc.o 00:02:30.024 LIB libspdk_env_dpdk_rpc.a 00:02:30.024 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.024 CC module/blob/bdev/blob_bdev.o 00:02:30.024 CC module/accel/dsa/accel_dsa.o 00:02:30.024 CC module/sock/posix/posix.o 00:02:30.024 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:30.024 CC module/keyring/linux/keyring.o 00:02:30.024 CC module/accel/iaa/accel_iaa.o 00:02:30.024 CC module/keyring/linux/keyring_rpc.o 00:02:30.024 CC module/accel/iaa/accel_iaa_rpc.o 00:02:30.024 CC module/accel/ioat/accel_ioat.o 00:02:30.024 CC module/accel/ioat/accel_ioat_rpc.o 00:02:30.024 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:30.024 CC module/scheduler/gscheduler/gscheduler.o 00:02:30.024 CC module/keyring/file/keyring.o 00:02:30.024 CC module/keyring/file/keyring_rpc.o 00:02:30.024 SO libspdk_env_dpdk_rpc.so.6.0 00:02:30.285 SYMLINK libspdk_env_dpdk_rpc.so 00:02:30.285 LIB libspdk_accel_error.a 00:02:30.285 LIB libspdk_keyring_linux.a 00:02:30.285 LIB libspdk_scheduler_gscheduler.a 00:02:30.285 LIB libspdk_scheduler_dynamic.a 00:02:30.285 LIB libspdk_scheduler_dpdk_governor.a 00:02:30.285 LIB libspdk_keyring_file.a 00:02:30.285 SO libspdk_accel_error.so.2.0 00:02:30.285 SO libspdk_keyring_linux.so.1.0 00:02:30.285 LIB libspdk_accel_ioat.a 00:02:30.285 SO libspdk_scheduler_gscheduler.so.4.0 00:02:30.285 LIB libspdk_accel_iaa.a 00:02:30.285 SO libspdk_scheduler_dynamic.so.4.0 00:02:30.285 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:30.285 SO libspdk_keyring_file.so.1.0 00:02:30.285 LIB libspdk_accel_dsa.a 00:02:30.285 SO libspdk_accel_ioat.so.6.0 00:02:30.285 LIB libspdk_blob_bdev.a 00:02:30.285 SYMLINK libspdk_accel_error.so 00:02:30.285 SYMLINK libspdk_keyring_linux.so 00:02:30.285 SO libspdk_accel_iaa.so.3.0 00:02:30.285 SYMLINK libspdk_scheduler_gscheduler.so 00:02:30.285 SO libspdk_blob_bdev.so.11.0 
00:02:30.285 SYMLINK libspdk_keyring_file.so 00:02:30.285 SO libspdk_accel_dsa.so.5.0 00:02:30.285 SYMLINK libspdk_scheduler_dynamic.so 00:02:30.285 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:30.285 SYMLINK libspdk_accel_iaa.so 00:02:30.285 SYMLINK libspdk_accel_ioat.so 00:02:30.545 LIB libspdk_vfu_device.a 00:02:30.545 SYMLINK libspdk_blob_bdev.so 00:02:30.545 SYMLINK libspdk_accel_dsa.so 00:02:30.545 SO libspdk_vfu_device.so.3.0 00:02:30.545 SYMLINK libspdk_vfu_device.so 00:02:30.804 LIB libspdk_sock_posix.a 00:02:30.804 SO libspdk_sock_posix.so.6.0 00:02:30.804 SYMLINK libspdk_sock_posix.so 00:02:31.064 CC module/bdev/malloc/bdev_malloc.o 00:02:31.064 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.064 CC module/bdev/error/vbdev_error.o 00:02:31.064 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.064 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.064 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.064 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.064 CC module/bdev/raid/bdev_raid.o 00:02:31.064 CC module/bdev/nvme/bdev_nvme.o 00:02:31.064 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.064 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.064 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.064 CC module/bdev/nvme/nvme_rpc.o 00:02:31.064 CC module/bdev/raid/raid0.o 00:02:31.064 CC module/bdev/nvme/vbdev_opal.o 00:02:31.064 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.064 CC module/bdev/raid/raid1.o 00:02:31.064 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.064 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.064 CC module/bdev/raid/concat.o 00:02:31.064 CC module/bdev/delay/vbdev_delay.o 00:02:31.064 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.064 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.064 CC module/bdev/gpt/gpt.o 00:02:31.064 CC module/bdev/null/bdev_null.o 00:02:31.064 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.064 CC module/bdev/split/vbdev_split.o 00:02:31.064 CC module/bdev/null/bdev_null_rpc.o 00:02:31.064 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.064 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.064 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.064 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.064 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.064 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.064 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.064 CC module/bdev/aio/bdev_aio.o 00:02:31.064 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.064 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.064 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.064 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.064 CC module/bdev/ftl/bdev_ftl.o 00:02:31.064 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.324 LIB libspdk_blobfs_bdev.a 00:02:31.324 SO libspdk_blobfs_bdev.so.6.0 00:02:31.324 LIB libspdk_bdev_error.a 00:02:31.324 LIB libspdk_bdev_null.a 00:02:31.324 SYMLINK libspdk_blobfs_bdev.so 00:02:31.324 SO libspdk_bdev_error.so.6.0 00:02:31.324 SO libspdk_bdev_null.so.6.0 00:02:31.324 LIB libspdk_bdev_gpt.a 00:02:31.324 LIB libspdk_bdev_split.a 00:02:31.324 LIB libspdk_bdev_aio.a 00:02:31.324 LIB libspdk_bdev_ftl.a 00:02:31.324 LIB libspdk_bdev_malloc.a 00:02:31.324 SO libspdk_bdev_gpt.so.6.0 00:02:31.324 SO libspdk_bdev_split.so.6.0 00:02:31.324 LIB libspdk_bdev_zone_block.a 00:02:31.324 LIB libspdk_bdev_passthru.a 00:02:31.324 SYMLINK libspdk_bdev_error.so 00:02:31.324 LIB libspdk_bdev_iscsi.a 00:02:31.324 SYMLINK libspdk_bdev_null.so 00:02:31.324 SO libspdk_bdev_malloc.so.6.0 00:02:31.324 SO libspdk_bdev_ftl.so.6.0 00:02:31.324 SO libspdk_bdev_aio.so.6.0 00:02:31.324 
SO libspdk_bdev_zone_block.so.6.0 00:02:31.324 SO libspdk_bdev_passthru.so.6.0 00:02:31.324 LIB libspdk_bdev_delay.a 00:02:31.324 SO libspdk_bdev_iscsi.so.6.0 00:02:31.324 SYMLINK libspdk_bdev_split.so 00:02:31.324 SYMLINK libspdk_bdev_gpt.so 00:02:31.324 SYMLINK libspdk_bdev_malloc.so 00:02:31.324 SYMLINK libspdk_bdev_ftl.so 00:02:31.324 SO libspdk_bdev_delay.so.6.0 00:02:31.586 SYMLINK libspdk_bdev_aio.so 00:02:31.586 SYMLINK libspdk_bdev_zone_block.so 00:02:31.586 SYMLINK libspdk_bdev_passthru.so 00:02:31.586 SYMLINK libspdk_bdev_iscsi.so 00:02:31.586 LIB libspdk_bdev_lvol.a 00:02:31.586 SYMLINK libspdk_bdev_delay.so 00:02:31.586 LIB libspdk_bdev_virtio.a 00:02:31.586 SO libspdk_bdev_lvol.so.6.0 00:02:31.586 SO libspdk_bdev_virtio.so.6.0 00:02:31.586 SYMLINK libspdk_bdev_lvol.so 00:02:31.586 SYMLINK libspdk_bdev_virtio.so 00:02:31.848 LIB libspdk_bdev_raid.a 00:02:31.848 SO libspdk_bdev_raid.so.6.0 00:02:31.848 SYMLINK libspdk_bdev_raid.so 00:02:32.792 LIB libspdk_bdev_nvme.a 00:02:32.792 SO libspdk_bdev_nvme.so.7.0 00:02:33.053 SYMLINK libspdk_bdev_nvme.so 00:02:33.625 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:33.625 CC module/event/subsystems/vmd/vmd.o 00:02:33.625 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.625 CC module/event/subsystems/keyring/keyring.o 00:02:33.625 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.625 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.625 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.625 CC module/event/subsystems/sock/sock.o 00:02:33.625 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.886 LIB libspdk_event_vfu_tgt.a 00:02:33.886 LIB libspdk_event_scheduler.a 00:02:33.886 LIB libspdk_event_keyring.a 00:02:33.886 LIB libspdk_event_vhost_blk.a 00:02:33.886 LIB libspdk_event_vmd.a 00:02:33.886 LIB libspdk_event_iobuf.a 00:02:33.886 LIB libspdk_event_sock.a 00:02:33.886 SO libspdk_event_vfu_tgt.so.3.0 00:02:33.887 SO libspdk_event_scheduler.so.4.0 00:02:33.887 SO libspdk_event_vhost_blk.so.3.0 00:02:33.887 SO libspdk_event_keyring.so.1.0 00:02:33.887 SO libspdk_event_iobuf.so.3.0 00:02:33.887 SO libspdk_event_vmd.so.6.0 00:02:33.887 SO libspdk_event_sock.so.5.0 00:02:33.887 SYMLINK libspdk_event_scheduler.so 00:02:33.887 SYMLINK libspdk_event_vfu_tgt.so 00:02:33.887 SYMLINK libspdk_event_vhost_blk.so 00:02:33.887 SYMLINK libspdk_event_keyring.so 00:02:33.887 SYMLINK libspdk_event_iobuf.so 00:02:33.887 SYMLINK libspdk_event_vmd.so 00:02:33.887 SYMLINK libspdk_event_sock.so 00:02:34.459 CC module/event/subsystems/accel/accel.o 00:02:34.459 LIB libspdk_event_accel.a 00:02:34.459 SO libspdk_event_accel.so.6.0 00:02:34.459 SYMLINK libspdk_event_accel.so 00:02:35.037 CC module/event/subsystems/bdev/bdev.o 00:02:35.037 LIB libspdk_event_bdev.a 00:02:35.037 SO libspdk_event_bdev.so.6.0 00:02:35.298 SYMLINK libspdk_event_bdev.so 00:02:35.559 CC module/event/subsystems/scsi/scsi.o 00:02:35.559 CC module/event/subsystems/ublk/ublk.o 00:02:35.559 CC module/event/subsystems/nbd/nbd.o 00:02:35.559 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:35.559 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:35.559 LIB libspdk_event_ublk.a 00:02:35.559 LIB libspdk_event_scsi.a 00:02:35.820 LIB libspdk_event_nbd.a 00:02:35.820 SO libspdk_event_scsi.so.6.0 00:02:35.820 SO libspdk_event_ublk.so.3.0 00:02:35.820 SO libspdk_event_nbd.so.6.0 00:02:35.820 LIB libspdk_event_nvmf.a 00:02:35.820 SYMLINK libspdk_event_scsi.so 00:02:35.820 SYMLINK libspdk_event_ublk.so 00:02:35.820 SO libspdk_event_nvmf.so.6.0 00:02:35.820 SYMLINK 
libspdk_event_nbd.so 00:02:35.820 SYMLINK libspdk_event_nvmf.so 00:02:36.081 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.082 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.343 LIB libspdk_event_vhost_scsi.a 00:02:36.343 LIB libspdk_event_iscsi.a 00:02:36.343 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.343 SO libspdk_event_iscsi.so.6.0 00:02:36.343 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.343 SYMLINK libspdk_event_iscsi.so 00:02:36.603 SO libspdk.so.6.0 00:02:36.603 SYMLINK libspdk.so 00:02:37.184 CC app/trace_record/trace_record.o 00:02:37.184 CC app/spdk_nvme_identify/identify.o 00:02:37.184 CC app/spdk_lspci/spdk_lspci.o 00:02:37.184 TEST_HEADER include/spdk/accel.h 00:02:37.184 CC test/rpc_client/rpc_client_test.o 00:02:37.184 TEST_HEADER include/spdk/assert.h 00:02:37.184 TEST_HEADER include/spdk/barrier.h 00:02:37.184 TEST_HEADER include/spdk/accel_module.h 00:02:37.184 CXX app/trace/trace.o 00:02:37.184 TEST_HEADER include/spdk/base64.h 00:02:37.184 TEST_HEADER include/spdk/bdev.h 00:02:37.184 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.184 TEST_HEADER include/spdk/bdev_module.h 00:02:37.184 CC app/spdk_nvme_perf/perf.o 00:02:37.184 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.184 TEST_HEADER include/spdk/bit_pool.h 00:02:37.184 CC app/spdk_top/spdk_top.o 00:02:37.184 CC app/spdk_dd/spdk_dd.o 00:02:37.184 TEST_HEADER include/spdk/bit_array.h 00:02:37.184 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.184 TEST_HEADER include/spdk/blobfs.h 00:02:37.184 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.184 TEST_HEADER include/spdk/conf.h 00:02:37.184 TEST_HEADER include/spdk/config.h 00:02:37.184 CC app/vhost/vhost.o 00:02:37.184 TEST_HEADER include/spdk/blob.h 00:02:37.184 TEST_HEADER include/spdk/crc16.h 00:02:37.184 TEST_HEADER include/spdk/cpuset.h 00:02:37.184 TEST_HEADER include/spdk/crc32.h 00:02:37.184 TEST_HEADER include/spdk/dif.h 00:02:37.184 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.184 TEST_HEADER include/spdk/crc64.h 00:02:37.185 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.185 TEST_HEADER include/spdk/endian.h 00:02:37.185 TEST_HEADER include/spdk/dma.h 00:02:37.185 TEST_HEADER include/spdk/fd_group.h 00:02:37.185 TEST_HEADER include/spdk/event.h 00:02:37.185 TEST_HEADER include/spdk/env.h 00:02:37.185 TEST_HEADER include/spdk/file.h 00:02:37.185 TEST_HEADER include/spdk/fd.h 00:02:37.185 TEST_HEADER include/spdk/ftl.h 00:02:37.185 TEST_HEADER include/spdk/hexlify.h 00:02:37.185 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.185 TEST_HEADER include/spdk/histogram_data.h 00:02:37.185 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.185 TEST_HEADER include/spdk/idxd.h 00:02:37.185 TEST_HEADER include/spdk/init.h 00:02:37.185 TEST_HEADER include/spdk/ioat.h 00:02:37.185 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.185 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.185 CC app/nvmf_tgt/nvmf_main.o 00:02:37.185 TEST_HEADER include/spdk/json.h 00:02:37.185 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.185 TEST_HEADER include/spdk/keyring_module.h 00:02:37.185 TEST_HEADER include/spdk/likely.h 00:02:37.185 TEST_HEADER include/spdk/keyring.h 00:02:37.185 TEST_HEADER include/spdk/log.h 00:02:37.185 TEST_HEADER include/spdk/mmio.h 00:02:37.185 TEST_HEADER include/spdk/nbd.h 00:02:37.185 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.185 TEST_HEADER include/spdk/lvol.h 00:02:37.185 TEST_HEADER include/spdk/memory.h 00:02:37.185 TEST_HEADER include/spdk/notify.h 00:02:37.185 TEST_HEADER include/spdk/nvme.h 00:02:37.185 TEST_HEADER include/spdk/nvme_intel.h 
00:02:37.185 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.185 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.185 CC app/spdk_tgt/spdk_tgt.o 00:02:37.185 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.185 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.185 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.185 TEST_HEADER include/spdk/nvmf.h 00:02:37.185 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.185 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.185 TEST_HEADER include/spdk/opal_spec.h 00:02:37.185 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.185 TEST_HEADER include/spdk/opal.h 00:02:37.185 TEST_HEADER include/spdk/pci_ids.h 00:02:37.185 TEST_HEADER include/spdk/pipe.h 00:02:37.185 TEST_HEADER include/spdk/reduce.h 00:02:37.185 TEST_HEADER include/spdk/queue.h 00:02:37.185 TEST_HEADER include/spdk/rpc.h 00:02:37.185 TEST_HEADER include/spdk/scheduler.h 00:02:37.185 TEST_HEADER include/spdk/scsi.h 00:02:37.185 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.185 TEST_HEADER include/spdk/string.h 00:02:37.185 TEST_HEADER include/spdk/sock.h 00:02:37.185 TEST_HEADER include/spdk/thread.h 00:02:37.185 TEST_HEADER include/spdk/trace.h 00:02:37.185 TEST_HEADER include/spdk/stdinc.h 00:02:37.185 TEST_HEADER include/spdk/trace_parser.h 00:02:37.185 TEST_HEADER include/spdk/tree.h 00:02:37.185 TEST_HEADER include/spdk/util.h 00:02:37.185 TEST_HEADER include/spdk/ublk.h 00:02:37.185 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.185 TEST_HEADER include/spdk/version.h 00:02:37.185 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.185 TEST_HEADER include/spdk/uuid.h 00:02:37.185 TEST_HEADER include/spdk/vhost.h 00:02:37.185 TEST_HEADER include/spdk/vmd.h 00:02:37.185 TEST_HEADER include/spdk/zipf.h 00:02:37.185 TEST_HEADER include/spdk/xor.h 00:02:37.185 CXX test/cpp_headers/accel_module.o 00:02:37.185 CXX test/cpp_headers/accel.o 00:02:37.185 CXX test/cpp_headers/assert.o 00:02:37.185 CXX test/cpp_headers/barrier.o 00:02:37.185 CXX test/cpp_headers/base64.o 00:02:37.185 CXX test/cpp_headers/bdev.o 00:02:37.185 CXX test/cpp_headers/bdev_zone.o 00:02:37.185 CXX test/cpp_headers/bit_array.o 00:02:37.185 CXX test/cpp_headers/blob_bdev.o 00:02:37.185 CXX test/cpp_headers/bdev_module.o 00:02:37.185 CXX test/cpp_headers/bit_pool.o 00:02:37.185 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.185 CXX test/cpp_headers/blobfs.o 00:02:37.185 CXX test/cpp_headers/blob.o 00:02:37.185 CXX test/cpp_headers/conf.o 00:02:37.185 CXX test/cpp_headers/crc16.o 00:02:37.185 CXX test/cpp_headers/config.o 00:02:37.185 CXX test/cpp_headers/crc32.o 00:02:37.185 CXX test/cpp_headers/cpuset.o 00:02:37.185 CXX test/cpp_headers/crc64.o 00:02:37.185 CXX test/cpp_headers/dma.o 00:02:37.185 CXX test/cpp_headers/endian.o 00:02:37.185 CXX test/cpp_headers/dif.o 00:02:37.185 CXX test/cpp_headers/event.o 00:02:37.185 CXX test/cpp_headers/fd_group.o 00:02:37.185 CXX test/cpp_headers/env_dpdk.o 00:02:37.185 CXX test/cpp_headers/env.o 00:02:37.185 CXX test/cpp_headers/fd.o 00:02:37.185 CXX test/cpp_headers/file.o 00:02:37.185 CXX test/cpp_headers/ftl.o 00:02:37.185 CXX test/cpp_headers/gpt_spec.o 00:02:37.185 CXX test/cpp_headers/hexlify.o 00:02:37.185 CXX test/cpp_headers/histogram_data.o 00:02:37.185 CXX test/cpp_headers/idxd.o 00:02:37.185 CXX test/cpp_headers/idxd_spec.o 00:02:37.185 CXX test/cpp_headers/ioat.o 00:02:37.185 CXX test/cpp_headers/init.o 00:02:37.185 CXX test/cpp_headers/iscsi_spec.o 00:02:37.185 CXX test/cpp_headers/ioat_spec.o 00:02:37.185 CXX test/cpp_headers/json.o 00:02:37.185 CXX test/cpp_headers/keyring.o 
00:02:37.185 CXX test/cpp_headers/jsonrpc.o 00:02:37.185 CXX test/cpp_headers/keyring_module.o 00:02:37.185 CXX test/cpp_headers/log.o 00:02:37.185 CXX test/cpp_headers/likely.o 00:02:37.185 CXX test/cpp_headers/lvol.o 00:02:37.185 CXX test/cpp_headers/memory.o 00:02:37.185 CXX test/cpp_headers/mmio.o 00:02:37.185 CXX test/cpp_headers/nbd.o 00:02:37.185 CXX test/cpp_headers/notify.o 00:02:37.185 CXX test/cpp_headers/nvme.o 00:02:37.185 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.185 CXX test/cpp_headers/nvme_intel.o 00:02:37.185 CC examples/nvme/arbitration/arbitration.o 00:02:37.185 CC examples/nvme/reconnect/reconnect.o 00:02:37.185 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.185 CC examples/nvme/hotplug/hotplug.o 00:02:37.185 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.185 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.185 CC examples/nvme/hello_world/hello_world.o 00:02:37.185 CXX test/cpp_headers/nvme_spec.o 00:02:37.449 CC test/nvme/reset/reset.o 00:02:37.449 CC test/app/jsoncat/jsoncat.o 00:02:37.449 CC test/event/reactor/reactor.o 00:02:37.449 CC test/event/event_perf/event_perf.o 00:02:37.449 CC test/nvme/sgl/sgl.o 00:02:37.449 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.449 CC examples/nvme/abort/abort.o 00:02:37.449 CC examples/sock/hello_world/hello_sock.o 00:02:37.449 CC test/nvme/e2edp/nvme_dp.o 00:02:37.449 CC examples/ioat/perf/perf.o 00:02:37.449 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.449 CC test/env/memory/memory_ut.o 00:02:37.449 CC test/env/vtophys/vtophys.o 00:02:37.449 CC examples/util/zipf/zipf.o 00:02:37.449 CC test/nvme/overhead/overhead.o 00:02:37.449 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.449 CC test/app/stub/stub.o 00:02:37.449 CC test/nvme/aer/aer.o 00:02:37.449 CC examples/idxd/perf/perf.o 00:02:37.449 CC examples/ioat/verify/verify.o 00:02:37.449 CC test/nvme/reserve/reserve.o 00:02:37.449 CC test/nvme/err_injection/err_injection.o 00:02:37.449 CC test/app/bdev_svc/bdev_svc.o 00:02:37.449 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.449 CC test/env/pci/pci_ut.o 00:02:37.449 CC test/app/histogram_perf/histogram_perf.o 00:02:37.449 CC test/event/reactor_perf/reactor_perf.o 00:02:37.449 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.449 CC examples/vmd/led/led.o 00:02:37.449 CC test/nvme/fdp/fdp.o 00:02:37.449 CC test/nvme/connect_stress/connect_stress.o 00:02:37.449 CC examples/accel/perf/accel_perf.o 00:02:37.449 CC test/dma/test_dma/test_dma.o 00:02:37.449 CC test/bdev/bdevio/bdevio.o 00:02:37.449 CC test/nvme/startup/startup.o 00:02:37.449 CC test/event/app_repeat/app_repeat.o 00:02:37.449 CC app/fio/nvme/fio_plugin.o 00:02:37.449 CC examples/nvmf/nvmf/nvmf.o 00:02:37.449 CC test/accel/dif/dif.o 00:02:37.449 CC test/nvme/boot_partition/boot_partition.o 00:02:37.449 CC examples/blob/cli/blobcli.o 00:02:37.449 CC examples/thread/thread/thread_ex.o 00:02:37.449 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.449 CC test/nvme/simple_copy/simple_copy.o 00:02:37.449 CC test/event/scheduler/scheduler.o 00:02:37.449 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.449 CC test/nvme/compliance/nvme_compliance.o 00:02:37.449 CC test/blobfs/mkfs/mkfs.o 00:02:37.449 CC test/thread/poller_perf/poller_perf.o 00:02:37.449 CC test/nvme/cuse/cuse.o 00:02:37.449 CC app/fio/bdev/fio_plugin.o 00:02:37.449 CC examples/blob/hello_world/hello_blob.o 00:02:37.449 LINK rpc_client_test 00:02:37.712 LINK spdk_lspci 00:02:37.712 LINK vhost 00:02:37.712 LINK spdk_tgt 00:02:37.712 LINK interrupt_tgt 00:02:37.712 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:37.978 LINK spdk_nvme_discover 00:02:37.978 CC test/lvol/esnap/esnap.o 00:02:37.978 LINK reactor 00:02:37.978 LINK nvmf_tgt 00:02:37.978 LINK event_perf 00:02:37.978 LINK jsoncat 00:02:37.978 CXX test/cpp_headers/nvme_zns.o 00:02:37.978 LINK cmb_copy 00:02:37.978 LINK spdk_trace_record 00:02:37.978 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:37.978 LINK zipf 00:02:37.978 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.978 LINK lsvmd 00:02:37.978 LINK iscsi_tgt 00:02:37.978 LINK env_dpdk_post_init 00:02:37.978 LINK spdk_dd 00:02:37.978 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.978 CXX test/cpp_headers/nvmf.o 00:02:37.978 CXX test/cpp_headers/nvmf_spec.o 00:02:37.978 LINK startup 00:02:37.978 CXX test/cpp_headers/nvmf_transport.o 00:02:37.978 CXX test/cpp_headers/opal.o 00:02:37.978 LINK led 00:02:37.978 CXX test/cpp_headers/opal_spec.o 00:02:37.978 CXX test/cpp_headers/pci_ids.o 00:02:37.978 CXX test/cpp_headers/pipe.o 00:02:37.978 LINK ioat_perf 00:02:37.978 LINK doorbell_aers 00:02:37.978 CXX test/cpp_headers/queue.o 00:02:37.978 CXX test/cpp_headers/reduce.o 00:02:37.978 CXX test/cpp_headers/rpc.o 00:02:37.978 CXX test/cpp_headers/scheduler.o 00:02:37.978 CXX test/cpp_headers/scsi.o 00:02:37.978 CXX test/cpp_headers/scsi_spec.o 00:02:37.978 CXX test/cpp_headers/sock.o 00:02:37.978 CXX test/cpp_headers/stdinc.o 00:02:37.978 CXX test/cpp_headers/thread.o 00:02:37.978 CXX test/cpp_headers/string.o 00:02:37.978 CXX test/cpp_headers/trace.o 00:02:37.978 CXX test/cpp_headers/trace_parser.o 00:02:37.978 LINK hotplug 00:02:37.978 LINK hello_sock 00:02:37.978 CXX test/cpp_headers/tree.o 00:02:37.978 LINK pmr_persistence 00:02:37.978 CXX test/cpp_headers/ublk.o 00:02:37.978 CXX test/cpp_headers/util.o 00:02:37.978 CXX test/cpp_headers/uuid.o 00:02:37.978 LINK histogram_perf 00:02:37.978 CXX test/cpp_headers/version.o 00:02:37.978 CXX test/cpp_headers/vfio_user_pci.o 00:02:37.978 CXX test/cpp_headers/vfio_user_spec.o 00:02:37.978 LINK simple_copy 00:02:37.978 LINK bdev_svc 00:02:37.978 LINK vtophys 00:02:37.978 LINK reset 00:02:37.978 CXX test/cpp_headers/vhost.o 00:02:37.978 LINK reactor_perf 00:02:37.978 LINK thread 00:02:37.978 LINK boot_partition 00:02:38.236 CXX test/cpp_headers/vmd.o 00:02:38.236 CXX test/cpp_headers/xor.o 00:02:38.236 LINK app_repeat 00:02:38.236 LINK hello_bdev 00:02:38.236 CXX test/cpp_headers/zipf.o 00:02:38.236 LINK hello_world 00:02:38.236 LINK stub 00:02:38.236 LINK poller_perf 00:02:38.236 LINK err_injection 00:02:38.236 LINK fused_ordering 00:02:38.236 LINK arbitration 00:02:38.236 LINK connect_stress 00:02:38.236 LINK nvmf 00:02:38.236 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:38.236 LINK abort 00:02:38.236 LINK nvme_compliance 00:02:38.236 LINK spdk_trace 00:02:38.236 LINK mkfs 00:02:38.236 LINK verify 00:02:38.236 LINK sgl 00:02:38.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:38.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.236 LINK reserve 00:02:38.236 LINK test_dma 00:02:38.236 LINK aer 00:02:38.236 LINK nvme_dp 00:02:38.236 LINK overhead 00:02:38.236 LINK fdp 00:02:38.236 LINK hello_blob 00:02:38.236 LINK reconnect 00:02:38.236 LINK scheduler 00:02:38.494 LINK dif 00:02:38.494 LINK idxd_perf 00:02:38.494 LINK bdevio 00:02:38.494 LINK blobcli 00:02:38.494 LINK nvme_manage 00:02:38.494 LINK pci_ut 00:02:38.494 LINK accel_perf 00:02:38.494 LINK spdk_nvme 00:02:38.494 LINK spdk_bdev 00:02:38.494 LINK spdk_top 00:02:38.494 LINK mem_callbacks 00:02:38.754 LINK nvme_fuzz 00:02:38.754 LINK 
spdk_nvme_identify 00:02:38.754 LINK bdevperf 00:02:38.754 LINK spdk_nvme_perf 00:02:38.754 LINK vhost_fuzz 00:02:38.754 LINK memory_ut 00:02:39.324 LINK cuse 00:02:39.894 LINK iscsi_fuzz 00:02:41.804 LINK esnap 00:02:42.375 00:02:42.375 real 0m50.063s 00:02:42.375 user 6m31.382s 00:02:42.375 sys 4m52.284s 00:02:42.375 11:09:39 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:42.375 11:09:39 make -- common/autotest_common.sh@10 -- $ set +x 00:02:42.375 ************************************ 00:02:42.375 END TEST make 00:02:42.375 ************************************ 00:02:42.375 11:09:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:42.375 11:09:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:42.375 11:09:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:42.375 11:09:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.375 11:09:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:42.375 11:09:39 -- pm/common@44 -- $ pid=1217499 00:02:42.375 11:09:39 -- pm/common@50 -- $ kill -TERM 1217499 00:02:42.375 11:09:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.375 11:09:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:42.375 11:09:39 -- pm/common@44 -- $ pid=1217500 00:02:42.375 11:09:39 -- pm/common@50 -- $ kill -TERM 1217500 00:02:42.375 11:09:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.375 11:09:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:42.375 11:09:39 -- pm/common@44 -- $ pid=1217502 00:02:42.375 11:09:39 -- pm/common@50 -- $ kill -TERM 1217502 00:02:42.375 11:09:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.375 11:09:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:42.375 11:09:39 -- pm/common@44 -- $ pid=1217527 00:02:42.375 11:09:39 -- pm/common@50 -- $ sudo -E kill -TERM 1217527 00:02:42.636 11:09:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.636 11:09:39 -- nvmf/common.sh@7 -- # uname -s 00:02:42.636 11:09:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.636 11:09:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.636 11:09:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.636 11:09:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.636 11:09:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.636 11:09:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.636 11:09:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.636 11:09:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.636 11:09:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.636 11:09:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:42.636 11:09:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:42.636 11:09:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:42.636 11:09:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:42.636 11:09:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:42.636 11:09:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:42.636 
11:09:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:42.636 11:09:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:42.636 11:09:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:42.636 11:09:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.636 11:09:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.636 11:09:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.636 11:09:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.636 11:09:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.636 11:09:39 -- paths/export.sh@5 -- # export PATH 00:02:42.636 11:09:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.636 11:09:39 -- nvmf/common.sh@47 -- # : 0 00:02:42.637 11:09:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:42.637 11:09:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:42.637 11:09:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:42.637 11:09:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:42.637 11:09:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:42.637 11:09:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:42.637 11:09:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:42.637 11:09:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:42.637 11:09:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:42.637 11:09:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:42.637 11:09:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:42.637 11:09:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:42.637 11:09:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.637 11:09:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:42.637 11:09:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.637 11:09:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:42.637 11:09:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:42.637 11:09:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:42.637 11:09:39 -- spdk/autotest.sh@48 -- # udevadm_pid=1278428 00:02:42.637 11:09:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:42.637 11:09:39 -- pm/common@17 -- # local monitor 00:02:42.637 11:09:39 -- 
spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:42.637 11:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.637 11:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.637 11:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.637 11:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.637 11:09:39 -- pm/common@21 -- # date +%s 00:02:42.637 11:09:39 -- pm/common@25 -- # sleep 1 00:02:42.637 11:09:39 -- pm/common@21 -- # date +%s 00:02:42.637 11:09:39 -- pm/common@21 -- # date +%s 00:02:42.637 11:09:39 -- pm/common@21 -- # date +%s 00:02:42.637 11:09:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010579 00:02:42.637 11:09:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010579 00:02:42.637 11:09:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010579 00:02:42.637 11:09:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010579 00:02:42.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010579_collect-vmstat.pm.log 00:02:42.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010579_collect-cpu-load.pm.log 00:02:42.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010579_collect-cpu-temp.pm.log 00:02:42.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010579_collect-bmc-pm.bmc.pm.log 00:02:43.578 11:09:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:43.578 11:09:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:43.578 11:09:40 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:43.578 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.578 11:09:40 -- spdk/autotest.sh@59 -- # create_test_list 00:02:43.578 11:09:40 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:43.578 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.578 11:09:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:43.578 11:09:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.578 11:09:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.578 11:09:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:43.578 11:09:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.578 11:09:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:43.578 11:09:40 -- common/autotest_common.sh@1454 -- # uname 00:02:43.578 11:09:40 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:43.578 11:09:40 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:02:43.578 11:09:40 -- common/autotest_common.sh@1474 -- # uname 00:02:43.578 11:09:40 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:43.578 11:09:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:43.578 11:09:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:43.578 11:09:40 -- spdk/autotest.sh@72 -- # hash lcov 00:02:43.578 11:09:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:43.578 11:09:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:43.578 --rc lcov_branch_coverage=1 00:02:43.578 --rc lcov_function_coverage=1 00:02:43.578 --rc genhtml_branch_coverage=1 00:02:43.578 --rc genhtml_function_coverage=1 00:02:43.578 --rc genhtml_legend=1 00:02:43.578 --rc geninfo_all_blocks=1 00:02:43.578 ' 00:02:43.578 11:09:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:43.578 --rc lcov_branch_coverage=1 00:02:43.578 --rc lcov_function_coverage=1 00:02:43.578 --rc genhtml_branch_coverage=1 00:02:43.578 --rc genhtml_function_coverage=1 00:02:43.578 --rc genhtml_legend=1 00:02:43.578 --rc geninfo_all_blocks=1 00:02:43.578 ' 00:02:43.578 11:09:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:43.578 --rc lcov_branch_coverage=1 00:02:43.578 --rc lcov_function_coverage=1 00:02:43.578 --rc genhtml_branch_coverage=1 00:02:43.578 --rc genhtml_function_coverage=1 00:02:43.578 --rc genhtml_legend=1 00:02:43.578 --rc geninfo_all_blocks=1 00:02:43.578 --no-external' 00:02:43.578 11:09:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:43.578 --rc lcov_branch_coverage=1 00:02:43.578 --rc lcov_function_coverage=1 00:02:43.578 --rc genhtml_branch_coverage=1 00:02:43.578 --rc genhtml_function_coverage=1 00:02:43.578 --rc genhtml_legend=1 00:02:43.578 --rc geninfo_all_blocks=1 00:02:43.578 --no-external' 00:02:43.578 11:09:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:43.839 lcov: LCOV version 1.14 00:02:43.839 11:09:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:11.052 
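Note on the coverage capture above: the lcov invocation uses -c -i (--initial) to record a zero-count baseline from the .gcno files produced at build time, before any test has executed; the "no functions found" warnings here and in the lines that follow are expected for header-only objects and do not abort the capture. A minimal sketch of the usual baseline-plus-capture flow (paths and tracefile names are illustrative, not the exact autotest ones):

# Zero-count baseline taken right after the build, before tests run.
SRC=/path/to/spdk            # build tree containing the .gcno files (illustrative)
OUT=/path/to/output          # coverage output directory (illustrative)
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $LCOV_OPTS --no-external -q -c -i -d "$SRC" -t Baseline -o "$OUT/cov_base.info"

# After the tests have produced .gcda counters, capture them for real...
lcov $LCOV_OPTS --no-external -q -c -d "$SRC" -t Tests -o "$OUT/cov_test.info"

# ...and merge with the baseline so never-executed files still show up at 0%.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
genhtml -o "$OUT/coverage" "$OUT/cov_total.info"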
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:11.052 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:11.053 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:11.053 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:11.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:11.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:11.054 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:11.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:11.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:11.623 11:10:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.623 11:10:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:11.623 11:10:08 -- common/autotest_common.sh@10 -- # set +x 00:03:11.623 11:10:08 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.623 11:10:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.825 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:03:15.825 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:15.825 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:15.825 11:10:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:15.825 11:10:13 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:15.825 11:10:13 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:15.825 11:10:13 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:15.825 11:10:13 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:15.825 11:10:13 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:15.825 11:10:13 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:15.825 11:10:13 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.825 11:10:13 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:15.825 11:10:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:15.825 11:10:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:15.825 11:10:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:15.825 11:10:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:15.825 11:10:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:15.826 11:10:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:16.087 No valid GPT data, bailing 00:03:16.087 11:10:13 -- scripts/common.sh@391 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:16.087 11:10:13 -- scripts/common.sh@391 -- # pt= 00:03:16.087 11:10:13 -- scripts/common.sh@392 -- # return 1 00:03:16.087 11:10:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:16.087 1+0 records in 00:03:16.087 1+0 records out 00:03:16.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541568 s, 194 MB/s 00:03:16.087 11:10:13 -- spdk/autotest.sh@118 -- # sync 00:03:16.087 11:10:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:16.087 11:10:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:16.087 11:10:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:24.222 11:10:20 -- spdk/autotest.sh@124 -- # uname -s 00:03:24.222 11:10:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:24.222 11:10:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.222 11:10:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:24.222 11:10:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:24.222 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:03:24.222 ************************************ 00:03:24.222 START TEST setup.sh 00:03:24.222 ************************************ 00:03:24.222 11:10:20 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.222 * Looking for test storage... 00:03:24.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.222 11:10:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:24.222 11:10:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:24.222 11:10:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:24.222 11:10:20 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:24.222 11:10:20 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:24.222 11:10:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.222 ************************************ 00:03:24.222 START TEST acl 00:03:24.222 ************************************ 00:03:24.222 11:10:20 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:24.222 * Looking for test storage... 
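The pre-cleanup traced above iterates whole NVMe namespaces (/dev/nvme*n!(*p*)), skips zoned devices, probes each remaining one for a partition table, and zeroes the first MiB when nothing recognizable is found, so the setup tests that follow start from a clean device; the dd output ("1048576 bytes ... copied") is that wipe. A rough, hypothetical rendering of the same logic (the device loop and checks are illustrative, not the exact autotest code):

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do                 # whole namespaces only, no partitions
    name=$(basename "$dev")
    # Leave zoned (ZNS) namespaces alone; their write rules make a blind dd unsafe.
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        continue
    fi
    # Wipe only devices for which blkid reports no partition-table type.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # clobber stale metadata in the first MiB
        sync
    fi
done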
00:03:24.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.223 11:10:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:24.223 11:10:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:24.223 11:10:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.223 11:10:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.559 11:10:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.559 11:10:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.559 11:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.559 11:10:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.559 11:10:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.559 11:10:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:31.856 Hugepages 00:03:31.856 node hugesize free / total 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.856 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 00:03:31.857 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.857 11:10:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:32.116 11:10:29 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:32.116 11:10:29 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:32.116 11:10:29 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:32.116 11:10:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.116 ************************************ 00:03:32.116 START TEST denied 00:03:32.116 ************************************ 00:03:32.116 11:10:29 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:32.116 11:10:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:32.116 11:10:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:32.116 11:10:29 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:32.116 11:10:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.116 11:10:29 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.318 0000:65:00.0 (8086 0a54): Skipping denied controller at 0000:65:00.0 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:36.318 11:10:33 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.318 11:10:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.603 00:03:41.603 real 0m9.315s 00:03:41.603 user 0m3.115s 00:03:41.603 sys 0m5.446s 00:03:41.603 11:10:38 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:41.603 11:10:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:41.603 ************************************ 00:03:41.603 END TEST denied 00:03:41.603 ************************************ 00:03:41.603 11:10:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.603 11:10:38 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:41.603 11:10:38 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:41.603 11:10:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.603 ************************************ 00:03:41.603 START TEST allowed 00:03:41.603 ************************************ 00:03:41.603 11:10:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:41.603 11:10:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:41.603 11:10:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:41.603 11:10:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:41.603 11:10:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.603 11:10:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.188 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.188 11:10:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:48.188 11:10:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:48.188 11:10:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:48.188 11:10:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.188 11:10:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.557 00:03:51.557 real 0m9.920s 00:03:51.557 user 0m2.866s 00:03:51.557 sys 0m5.206s 00:03:51.557 11:10:48 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:51.557 11:10:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:51.557 ************************************ 00:03:51.557 END TEST allowed 00:03:51.557 ************************************ 00:03:51.557 00:03:51.557 real 0m27.846s 00:03:51.557 user 0m9.213s 00:03:51.557 sys 0m16.270s 00:03:51.557 11:10:48 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:51.557 11:10:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.557 ************************************ 00:03:51.557 END TEST acl 00:03:51.557 ************************************ 00:03:51.557 11:10:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.557 11:10:48 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:51.557 11:10:48 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:51.557 11:10:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.557 ************************************ 00:03:51.557 START TEST hugepages 00:03:51.557 ************************************ 00:03:51.557 11:10:48 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.557 * Looking for test storage... 00:03:51.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 103469136 kB' 'MemAvailable: 106727924 kB' 'Buffers: 3736 kB' 'Cached: 14364892 kB' 'SwapCached: 0 kB' 'Active: 11391156 kB' 'Inactive: 3520652 kB' 'Active(anon): 10972212 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546808 kB' 'Mapped: 188512 kB' 'Shmem: 10429032 kB' 'KReclaimable: 287068 kB' 'Slab: 1022776 kB' 'SReclaimable: 287068 kB' 'SUnreclaim: 735708 kB' 'KernelStack: 25072 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69463464 kB' 'Committed_AS: 12504872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230396 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:51.557 11:10:48 setup.sh.hugepages -- setup/common.sh@31-32 [per-key scan: get_meminfo continues past MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive (and their anon/file splits), the swap and zswap counters, Dirty, Writeback, AnonPages, Mapped, Shmem, the slab, kernel-stack and page-table counters, the vmalloc, percpu and CMA counters and the HugePages_* counters, until it reaches the requested Hugepagesize key]
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@39-41 [for each of the two NUMA nodes and each /sys/devices/system/node/node$node/hugepages/hugepages-* size directory: echo 0, i.e. four "echo 0" writes that empty the per-node pools]
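The per-key walk condensed above is the usual bash idiom for pulling one value out of /proc/meminfo (or a per-node meminfo file): split each line on ': ', skip keys that do not match, and echo the value of the one that does. A minimal sketch of that pattern, assuming an illustrative helper name rather than the exact setup/common.sh code:

get_meminfo_value() {                         # illustrative name, not the real setup/common.sh helper
    local get=$1 node=${2:-}                  # key to look up, optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # (the traced helper also strips the leading "Node <N> " prefix from per-node files; omitted here)
    local var val _
    while IFS=': ' read -r var val _; do      # "Hugepagesize:    2048 kB" -> var=Hugepagesize, val=2048
        [[ $var == "$get" ]] || continue      # under set -x every skipped key shows up as a "continue" line
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}
# Example: get_meminfo_value Hugepagesize -> 2048 on this machine, which is the
# value hugepages.sh stores in default_hugepages above.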
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:51.558 11:10:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:51.558 11:10:48 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:51.558 11:10:48 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:51.558 11:10:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:51.558 ************************************
00:03:51.558 START TEST default_setup
00:03:51.558 ************************************
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.558 11:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:55.759 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:55.759 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:57.680 0000:65:00.0 (8086 0a54): nvme -> vfio-pci
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105639876 kB' 'MemAvailable: 108898600 kB' 'Buffers: 3736 kB' 'Cached: 14365052 kB' 'SwapCached: 0 kB' 'Active: 11413276 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994332 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568680 kB' 'Mapped: 188840 kB' 'Shmem: 10429192 kB' 'KReclaimable: 286940 kB' 'Slab: 1020644 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733704 kB' 'KernelStack: 24960 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12527732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230140 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
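The HugePages_Total: 1024 figure in the snapshot just printed is exactly what the get_test_nr_hugepages 2097152 0 call computed earlier: with a 2048 kB default huge page, a 2097152 kB request for node 0 works out to 1024 pages. A back-of-the-envelope restatement of that arithmetic, together with the nr_hugepages knobs the trace names (the snippet is illustrative, not the test's own code):

size_kb=2097152                                    # requested huge-page memory for node 0 (kB)
default_hugepages_kb=2048                          # Hugepagesize reported by /proc/meminfo (kB)
nr_hugepages=$(( size_kb / default_hugepages_kb ))
echo "$nr_hugepages"                               # 1024, matching HugePages_Total/HugePages_Free above
# Pools the log refers to:
#   /proc/sys/vm/nr_hugepages                                             global pool (global_huge_nr)
#   /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages                per-size pool (default_huge_nr)
#   /sys/devices/system/node/node<N>/hugepages/hugepages-*/nr_hugepages   per-node pools; clear_hp echoed 0
#                                                                         into these before the test began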
00:03:57.680 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [per-key scan: the AnonHugePages lookup walks the snapshot above key by key, issuing "continue" for every key from MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
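verify_nr_hugepages only records anon=0 after first checking the transparent-hugepage state string ("always [madvise] never" in this run) against *[never]*. Assuming that string is read from the standard THP sysfs knob (the trace shows only the comparison, so the exact path is an assumption), the check amounts to:

# Assumption: the "always [madvise] never" string comes from the usual THP control file.
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
    # THP is not fully disabled, so anonymous huge pages are possible and the
    # AnonHugePages counter (0 kB in the snapshot above) is worth recording.
    anon_kb=$(get_meminfo_value AnonHugePages)               # helper sketched earlier; prints 0 here
fi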
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.681 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.682 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105642204 kB' 'MemAvailable: 108900928 kB' 'Buffers: 3736 kB' 'Cached: 14365052 kB' 'SwapCached: 0 kB' 'Active: 11414844 kB' 'Inactive: 3520652 kB' 'Active(anon): 10995900 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570228 kB' 'Mapped: 188908 kB' 'Shmem: 10429192 kB' 'KReclaimable: 286940 kB' 'Slab: 1020644 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733704 kB' 'KernelStack: 24912 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12528940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230124 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
00:03:57.682 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [per-key scan: the HugePages_Surp lookup again continues past every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:57.683 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.683 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.683 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.683 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
IFS=': ' 00:03:57.683 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105636840 kB' 'MemAvailable: 108895564 kB' 'Buffers: 3736 kB' 'Cached: 14365072 kB' 'SwapCached: 0 kB' 'Active: 11417492 kB' 'Inactive: 3520652 kB' 'Active(anon): 10998548 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572792 kB' 'Mapped: 189232 kB' 'Shmem: 10429212 kB' 'KReclaimable: 286940 kB' 'Slab: 1020620 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733680 kB' 'KernelStack: 24944 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12532140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230096 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 
11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.684 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.685 nr_hugepages=1024 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.685 resv_hugepages=0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.685 surplus_hugepages=0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.685 anon_hugepages=0 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.685 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105636084 kB' 'MemAvailable: 108894808 kB' 'Buffers: 3736 kB' 'Cached: 14365092 kB' 'SwapCached: 0 kB' 'Active: 
11412152 kB' 'Inactive: 3520652 kB' 'Active(anon): 10993208 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567424 kB' 'Mapped: 188788 kB' 'Shmem: 10429232 kB' 'KReclaimable: 286940 kB' 'Slab: 1020696 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733756 kB' 'KernelStack: 24992 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12526040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230124 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 
11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.686 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:57.687 
11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.687 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 58262016 kB' 'MemUsed: 7399984 kB' 'SwapCached: 0 kB' 'Active: 3774788 kB' 'Inactive: 152040 kB' 'Active(anon): 3673276 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3596944 kB' 'Mapped: 49432 kB' 'AnonPages: 333152 kB' 'Shmem: 3343392 kB' 'KernelStack: 12456 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 436024 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 329188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.688 11:10:54 
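The trace above exercises the get_meminfo helper from setup/common.sh: it picks /proc/meminfo or a per-NUMA-node meminfo file, strips the "Node N" prefix, and scans key/value pairs until it hits the requested key (here HugePages_Surp for node 0). A minimal sketch of that lookup, reconstructed from the trace — simplified, with the exact pattern-matching of the original omitted — is:

# Sketch (assumption: reconstruction from the trace, not the verbatim common.sh source).
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem
    local mem_f=/proc/meminfo
    # If a node id is given and its sysfs meminfo exists, read the per-node file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop leading "Node 0 " from per-node entries
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line" # split "HugePages_Surp:       0" into key and value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0                                     # key not present
}

The hugepages.sh caller then combines these lookups exactly as traced: surp and resv from HugePages_Surp/HugePages_Rsvd, nr_hugepages from HugePages_Total, followed by the (( nr_hugepages == 1024 + surp + resv )) style consistency checks.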
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.688 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.689 node0=1024 expecting 1024 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.689 00:03:57.689 real 0m6.076s 00:03:57.689 user 0m1.531s 00:03:57.689 sys 0m2.744s 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:57.689 11:10:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:57.689 ************************************ 00:03:57.689 END TEST default_setup 00:03:57.689 ************************************ 00:03:57.689 11:10:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:57.689 11:10:54 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:57.689 11:10:54 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:57.689 11:10:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.689 ************************************ 00:03:57.689 START TEST per_node_1G_alloc 00:03:57.689 ************************************ 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
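A minimal standalone sketch of the page-count arithmetic the per_node_1G_alloc trace below walks through: get_test_nr_hugepages is called with 1048576 kB and nodes 0 1, and dividing that size by the 2048 kB default hugepage size reported in the meminfo dumps further down yields 512 pages for each listed node, after which setup.sh is invoked with NRHUGE=512 and HUGENODE=0,1. The variable names and the sysfs path in the comment are illustrative assumptions for this sketch, not SPDK's own helpers.

    size_kb=1048576              # per-node request passed to get_test_nr_hugepages
    default_hugepage_kb=2048     # Hugepagesize reported by /proc/meminfo
    node_ids=(0 1)               # HUGENODE=0,1
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512
    for node in "${node_ids[@]}"; do
        # per-node counts are typically applied through sysfs, e.g.
        # /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        echo "node${node}: ${nr_hugepages} hugepages requested"
    done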
00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.689 11:10:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.912 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.912 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.912 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.912 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:01.912 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.912 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.913 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105671768 kB' 'MemAvailable: 108930492 kB' 'Buffers: 3736 kB' 'Cached: 14365208 kB' 'SwapCached: 0 kB' 'Active: 11410768 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991824 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565272 kB' 'Mapped: 187628 kB' 'Shmem: 10429348 kB' 'KReclaimable: 286940 kB' 'Slab: 1020700 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733760 kB' 'KernelStack: 24960 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12513116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230172 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 
109051904 kB' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
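A quick cross-check of the /proc/meminfo snapshot printed above (a hedged reading; the trace itself does not perform this calculation): HugePages_Total is 1024 and Hugepagesize is 2048 kB, which accounts exactly for the reported Hugetlb figure and is presumably the pool still in place from the preceding default_setup test.

    echo $(( 1024 * 2048 ))   # 2097152 kB, matching 'Hugetlb: 2097152 kB' in the snapshot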
00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.913 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
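The long runs of "continue" lines above and below come from get_meminfo scanning every /proc/meminfo field until it reaches the requested key and echoing that value (here 0 for AnonHugePages, and next for HugePages_Surp). Below is a standalone approximation of that pattern; the function name is an illustrative assumption for this sketch, and the real helper traced from setup/common.sh also appears to switch to /sys/devices/system/node/node<N>/meminfo when a node argument is supplied.

    get_meminfo_value() {
        local want="$1" var val _
        while IFS=': ' read -r var val _; do
            # stop at the requested field and print its numeric value
            if [[ $var == "$want" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value HugePages_Surp   # e.g. 0, the value the trace assigns to surp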
00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105672284 kB' 'MemAvailable: 108931008 kB' 'Buffers: 3736 kB' 'Cached: 14365212 kB' 'SwapCached: 0 kB' 'Active: 11410792 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991848 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565400 kB' 'Mapped: 187636 kB' 'Shmem: 10429352 kB' 'KReclaimable: 286940 kB' 'Slab: 1020660 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733720 kB' 'KernelStack: 24864 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12511516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230140 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.914 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.915 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.916 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105671652 kB' 'MemAvailable: 108930376 kB' 'Buffers: 3736 kB' 'Cached: 14365228 kB' 'SwapCached: 0 kB' 'Active: 11410536 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991592 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565492 kB' 'Mapped: 187536 kB' 'Shmem: 10429368 kB' 'KReclaimable: 286940 kB' 'Slab: 1020648 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733708 kB' 'KernelStack: 25152 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12513148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.916 11:10:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.917 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.918 nr_hugepages=1024 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.918 resv_hugepages=0 00:04:01.918 11:10:58 
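At this point hugepages.sh has resolved both counters (surp=0, resv=0) and starts echoing its summary; the check that follows in the trace compares the HugePages_Total read back from meminfo against the requested page count plus those two values. A hedged sketch of that accounting step, with the 1024 target taken from this log rather than computed:

#!/usr/bin/env bash
# Sketch of the accounting check seen in the trace; 1024 is the requested
# target from this run, the other values are read live from /proc/meminfo.
nr_hugepages=1024
surp=$(awk '$1 == "HugePages_Surp:"  { print $2 }' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  { print $2 }' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
# In this log: 1024 == 1024 + 0 + 0, so the check passes.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2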
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.918 surplus_hugepages=0 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.918 anon_hugepages=0 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.918 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105669928 kB' 'MemAvailable: 108928652 kB' 'Buffers: 3736 kB' 'Cached: 14365252 kB' 'SwapCached: 0 kB' 'Active: 11410376 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991432 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565324 kB' 'Mapped: 187528 kB' 'Shmem: 10429392 kB' 'KReclaimable: 286940 kB' 'Slab: 1020648 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733708 kB' 'KernelStack: 25120 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12513172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230348 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.919 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.920 11:10:58 
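With the global HugePages_Total confirmed at 1024, get_nodes enumerates the NUMA nodes under /sys/devices/system/node and the test plans 512 pages for each of the two nodes. A rough sketch of that enumeration and even split (dry run only, nothing is written to sysfs):

#!/usr/bin/env bash
# Rough sketch of the per-node split seen in the trace: enumerate NUMA nodes
# and plan 512 hugepages on each. Dry run, no sysfs writes are performed.
shopt -s extglob nullglob
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512
done
echo "no_nodes=${#nodes_sys[@]}"
for n in "${!nodes_sys[@]}"; do
    echo "node$n: planning ${nodes_sys[$n]} hugepages"
done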
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.920 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59317568 kB' 'MemUsed: 6344432 kB' 'SwapCached: 0 kB' 'Active: 3773088 kB' 'Inactive: 152040 kB' 'Active(anon): 3671576 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597092 kB' 'Mapped: 49056 kB' 'AnonPages: 331272 kB' 'Shmem: 3343540 kB' 'KernelStack: 12440 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435780 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.921 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682028 kB' 'MemFree: 46351968 kB' 'MemUsed: 14330060 kB' 'SwapCached: 0 kB' 'Active: 7637080 kB' 'Inactive: 3368612 kB' 'Active(anon): 7319648 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 3368612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10771920 kB' 'Mapped: 138472 kB' 'AnonPages: 233808 kB' 'Shmem: 7085876 kB' 'KernelStack: 12664 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180104 kB' 'Slab: 584868 kB' 'SReclaimable: 180104 kB' 'SUnreclaim: 404764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
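The trace above is setup/common.sh walking a per-node meminfo dump field by field until it reaches the requested key (here HugePages_Surp on node 1). As a standalone illustration only (the function and variable names below are hypothetical, not the actual setup/common.sh helpers), the same lookup pattern can be sketched in bash roughly like this, assuming bash 4+ for mapfile and extglob:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern that strips the "Node N " prefix

    # get_meminfo_sketch KEY [NODE] - print the value of KEY from /proc/meminfo,
    # or from the per-node meminfo file when NODE is given; print 0 if not found.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node N "; drop it as the trace does
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done
        echo 0
    }

    get_meminfo_sketch HugePages_Surp 1   # prints 0 for the node1 dump shown above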
00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.922 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.923 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.924 node0=512 expecting 512 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:01.924 node1=512 expecting 512 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:01.924 00:04:01.924 real 0m3.927s 00:04:01.924 user 0m1.481s 00:04:01.924 sys 0m2.482s 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:01.924 11:10:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.924 ************************************ 00:04:01.924 END TEST per_node_1G_alloc 00:04:01.924 ************************************ 00:04:01.924 11:10:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:01.924 11:10:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:01.924 11:10:58 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:01.924 11:10:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.924 ************************************ 00:04:01.924 START TEST even_2G_alloc 00:04:01.924 ************************************ 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.924 11:10:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.223 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
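The even_2G_alloc setup traced above converts the 2097152 kB request into 1024 default-size (2048 kB) hugepages and assigns 512 to each of the two NUMA nodes before handing off to setup.sh. A rough sketch of that arithmetic, using hypothetical names rather than the real hugepages.sh helpers and taking the request in kB as the 2097152 -> 1024 conversion implies:

    # even_split_sketch SIZE_KB NO_NODES - derive a hugepage count from a kB size
    # and spread it evenly across NUMA nodes, echoing the per-node expectation.
    even_split_sketch() {
        local size_kb=$1 no_nodes=$2
        local hugepage_kb
        hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # typically 2048
        local nr_hugepages=$(( size_kb / hugepage_kb ))                  # 2097152 / 2048 = 1024
        local -a nodes_test
        local node
        for (( node = 0; node < no_nodes; node++ )); do
            nodes_test[node]=$(( nr_hugepages / no_nodes ))              # 512 per node with 2 nodes
        done
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
        done
    }

    even_split_sketch 2097152 2   # node0=512 expecting 512, node1=512 expecting 512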
00:04:05.223 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.223 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.224 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.224 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.490 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105661536 kB' 'MemAvailable: 108920260 kB' 'Buffers: 3736 kB' 'Cached: 14365384 kB' 'SwapCached: 0 kB' 'Active: 11410116 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991172 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564404 kB' 'Mapped: 187560 kB' 'Shmem: 10429524 kB' 'KReclaimable: 286940 kB' 'Slab: 1020180 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733240 kB' 'KernelStack: 25120 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230364 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.491 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.491 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
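The anon=0 entry that closes the trace above comes from a probe of transparent hugepage usage: the script only reads AnonHugePages when /sys/kernel/mm/transparent_hugepage/enabled is not set to "[never]", and on this host the value is 0. A minimal sketch of that check, with an illustrative function name rather than the real hugepages.sh code:

    # anon_hugepages_sketch - print AnonHugePages (kB) when THP is not globally
    # disabled, otherwise print 0.
    anon_hugepages_sketch() {
        local thp_state anon_kb=0
        thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
        if [[ $thp_state != *"[never]"* ]]; then
            anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        fi
        echo "$anon_kb"
    }

    anon_hugepages_sketch   # prints 0 in the run above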
00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105661704 kB' 'MemAvailable: 108920428 kB' 'Buffers: 3736 kB' 'Cached: 14365384 kB' 'SwapCached: 0 kB' 'Active: 11410424 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991480 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564740 kB' 'Mapped: 187636 kB' 'Shmem: 10429524 kB' 'KReclaimable: 286940 kB' 'Slab: 1020196 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733256 kB' 'KernelStack: 25104 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230364 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.492 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105662716 kB' 'MemAvailable: 108921440 kB' 'Buffers: 3736 kB' 'Cached: 14365404 kB' 'SwapCached: 0 kB' 'Active: 11410080 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991136 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564320 kB' 'Mapped: 187636 kB' 'Shmem: 10429544 kB' 'KReclaimable: 286940 kB' 'Slab: 1020228 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733288 kB' 'KernelStack: 25088 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230380 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.493 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 
11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 11:11:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.495 nr_hugepages=1024 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.495 resv_hugepages=0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.495 surplus_hugepages=0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.495 anon_hugepages=0 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
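[editor's note] The trace above has just resolved HugePages_Surp and HugePages_Rsvd to 0 and is about to re-read HugePages_Total. The pass/fail logic it exercises at setup/hugepages.sh@99-@109 boils down to the check sketched below; this is a reconstruction from the trace, not the verbatim SPDK source, and the awk one-liner is only a hypothetical stand-in for the traced get_meminfo helper (nr_hugepages=1024 is the count configured earlier in this test).

    #!/usr/bin/env bash
    # Sketch of the even_2G_alloc accounting check traced at setup/hugepages.sh@99-@109.
    # get_meminfo here is a minimal stand-in for the helper traced in this log.
    get_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

    nr_hugepages=1024                      # configured earlier in the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # The configured page count must match what the kernel reports once
    # surplus and reserved pages are accounted for:
    (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107
    (( 1024 == nr_hugepages ))                 # hugepages.sh@109
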
00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105663724 kB' 'MemAvailable: 108922448 kB' 'Buffers: 3736 kB' 'Cached: 14365404 kB' 'SwapCached: 0 kB' 'Active: 11409752 kB' 'Inactive: 3520652 kB' 'Active(anon): 10990808 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564460 kB' 'Mapped: 187556 kB' 'Shmem: 10429544 kB' 'KReclaimable: 286940 kB' 'Slab: 1020244 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733304 kB' 'KernelStack: 25136 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230380 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 
11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.495 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [identical IFS=': ' / read / continue xtrace repeated for each remaining meminfo field, KReclaimable through Unaccepted, none matching HugePages_Total] 00:04:05.496
11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59323616 kB' 'MemUsed: 6338384 kB' 'SwapCached: 0 kB' 'Active: 3773004 kB' 'Inactive: 152040 kB' 'Active(anon): 3671492 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597212 kB' 'Mapped: 49072 kB' 'AnonPages: 331000 kB' 'Shmem: 3343660 kB' 'KernelStack: 12408 kB' 'PageTables: 
4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435064 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [identical field-scan xtrace repeated for the remaining node0 meminfo fields, Inactive(file) through FilePmdMapped, none matching HugePages_Surp] 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 11:11:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682028 kB' 'MemFree: 46341272 kB' 'MemUsed: 14340756 kB' 'SwapCached: 0 kB' 'Active: 7636300 kB' 'Inactive: 3368612 kB' 'Active(anon): 7318868 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 3368612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10771968 kB' 'Mapped: 138484 kB' 'AnonPages: 233024 kB' 'Shmem: 7085924 kB' 'KernelStack: 12536 kB' 
'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180104 kB' 'Slab: 585140 kB' 'SReclaimable: 180104 kB' 'SUnreclaim: 405036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [identical field-scan xtrace repeated for the remaining node1 meminfo fields, Inactive(file) through FilePmdMapped, none matching HugePages_Surp] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.498 node0=512 expecting 512 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.498 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.499 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:05.499 node1=512 expecting 512 00:04:05.499 11:11:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.499 00:04:05.499 real 0m3.807s 00:04:05.499 user 0m1.405s 00:04:05.499 sys 0m2.436s 00:04:05.499 11:11:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:05.499 11:11:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.499 ************************************ 00:04:05.499 END TEST even_2G_alloc 00:04:05.499 ************************************ 00:04:05.761 11:11:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:05.761 11:11:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:05.761 11:11:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:05.761 11:11:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.761 ************************************ 00:04:05.761 START TEST odd_alloc 00:04:05.761 
************************************ 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.761 11:11:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.970 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 
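For context on the odd_alloc setup traced above: it requests 1025 hugepages of 2048 kB (HUGEMEM=2049) and splits them unevenly across the two NUMA nodes, 512 on one node and 513 on the other per the nodes_test assignments, before handing off to scripts/setup.sh. The lines below are a minimal, illustrative sketch of how such a per-node split can be applied through the kernel's per-node sysfs knobs; this is not the contents of scripts/setup.sh, and the node-to-count mapping is only an example of the 512/513 split the trace computes.

    #!/usr/bin/env bash
    # Illustrative sketch only: request 2 MiB hugepages per NUMA node using
    # a 512/513 split like the one odd_alloc computes above. The trace fixes
    # the split, not which node receives the odd page, so this mapping is an assumption.
    declare -A want=( [0]=512 [1]=513 )
    for node in "${!want[@]}"; do
        sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        echo "${want[$node]}" | sudo tee "$sysfs" >/dev/null
    done
    # Confirm what the kernel actually granted on each node.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
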
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.970 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.970 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105651744 kB' 'MemAvailable: 108910468 kB' 'Buffers: 3736 kB' 'Cached: 14365572 kB' 'SwapCached: 0 kB' 'Active: 11410512 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991568 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564664 kB' 'Mapped: 187580 kB' 'Shmem: 10429712 kB' 'KReclaimable: 286940 kB' 'Slab: 1020564 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733624 kB' 'KernelStack: 24992 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511016 kB' 'Committed_AS: 12512352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 
'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [identical field-scan xtrace repeated for MemFree through Committed_AS, none matching AnonHugePages] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 
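The long runs of [[ field == target ]] / continue entries in this excerpt all come from the get_meminfo helper in setup/common.sh, which walks a meminfo snapshot field by field until it reaches the requested counter and echoes its value. A minimal sketch of that loop, not the verbatim SPDK implementation (argument handling and the per-node branch are simplified):

    # Sketch only; the real setup/common.sh may arrange the node handling differently.
    shopt -s extglob
    get_meminfo() {                               # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo mem
        # use the per-node meminfo file when a node is given and present
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix used by per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue      # this is the [[ field == target ]] / continue run in the log
            echo "$val"                           # e.g. 0 for HugePages_Surp below
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                  # field not found
    }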
00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.972 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105652248 kB' 'MemAvailable: 108910972 kB' 'Buffers: 3736 kB' 'Cached: 14365572 kB' 'SwapCached: 0 kB' 'Active: 11411224 kB' 'Inactive: 3520652 kB' 'Active(anon): 10992280 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565384 kB' 'Mapped: 187656 kB' 'Shmem: 10429712 kB' 'KReclaimable: 286940 kB' 'Slab: 1020596 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733656 kB' 'KernelStack: 24976 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511016 kB' 'Committed_AS: 12512372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
[setup/common.sh@31-32: every field from MemTotal through HugePages_Rsvd is read and skipped with continue; none matches HugePages_Surp]
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
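The snapshot above is the state the odd_alloc case expects after asking for 1025 hugepages: all 1025 are free, none reserved or surplus, and the Hugetlb total is consistent with the 2048 kB page size. A quick standalone check of that arithmetic (values copied from the snapshot; illustration only, not part of the test):

    # Values copied from the MemTotal/HugePages snapshot above.
    hp_total=1025 hp_free=1025 hugepagesize_kb=2048 hugetlb_kb=2099200

    # Hugetlb should equal HugePages_Total * Hugepagesize: 1025 * 2048 = 2099200 kB.
    (( hp_total * hugepagesize_kb == hugetlb_kb )) && echo "Hugetlb matches Total * Hugepagesize"

    # No page is currently in use by the test, so Free equals Total.
    (( hp_free == hp_total )) && echo "all 1025 pages free"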
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.974 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105651288 kB' 'MemAvailable: 108910012 kB' 'Buffers: 3736 kB' 'Cached: 14365592 kB' 'SwapCached: 0 kB' 'Active: 11411048 kB' 'Inactive: 3520652 kB' 'Active(anon): 10992104 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565228 kB' 'Mapped: 187656 kB' 'Shmem: 10429732 kB' 'KReclaimable: 286940 kB' 'Slab: 1020564 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733624 kB' 'KernelStack: 24960 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511016 kB' 'Committed_AS: 12512392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
[setup/common.sh@31-32: every field from MemTotal through HugePages_Free is read and skipped with continue; none matches HugePages_Rsvd]
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:09.976 nr_hugepages=1025
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.976 resv_hugepages=0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.976 surplus_hugepages=0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.976 anon_hugepages=0
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
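The echoes and the two (( ... )) guards above are the heart of the verification: the counters just read back must account for exactly the odd number of pages requested, with nothing reserved or surplus. A condensed sketch of that step, reusing the get_meminfo sketch from earlier; variable names and the exit handling are illustrative, not the literal hugepages.sh code:

    # Sketch of the verification traced at setup/hugepages.sh@97-@110 (assumed structure).
    nr_hugepages=1025                         # odd page count this test configured

    anon=$(get_meminfo AnonHugePages)         # 0 in the log above
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    total=$(get_meminfo HugePages_Total)      # 1025

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The configured count must account for every page the kernel reports:
    # 1025 == 1025 + 0 + 0, and with no surplus/reserved pages the totals match exactly.
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) || exit 1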
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.976 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105651904 kB' 'MemAvailable: 108910628 kB' 'Buffers: 3736 kB' 'Cached: 14365612 kB' 'SwapCached: 0 kB' 'Active: 11410892 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991948 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565540 kB' 'Mapped: 187580 kB' 'Shmem: 10429752 kB' 'KReclaimable: 286940 kB' 'Slab: 1020588 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733648 kB' 'KernelStack: 24960 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511016 kB' 'Committed_AS: 12512412 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
[setup/common.sh@31-32: the snapshot fields are again read and skipped one by one (MemTotal, MemFree, MemAvailable, ... NFS_Unstable, ...) while looking for HugePages_Total ...]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59319388 kB' 'MemUsed: 6342612 kB' 'SwapCached: 0 kB' 'Active: 3776036 kB' 'Inactive: 152040 kB' 'Active(anon): 3674524 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597396 kB' 'Mapped: 49100 kB' 'AnonPages: 333988 kB' 'Shmem: 3343844 kB' 'KernelStack: 12504 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435164 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
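The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is setup/common.sh scanning a single meminfo file key by key until it reaches the requested field. A minimal sketch of that lookup pattern, assuming bash with extglob and using an illustrative helper name (get_meminfo_sketch) rather than the real function:

  #!/usr/bin/env bash
  shopt -s extglob
  # Look up one field, optionally from a per-NUMA-node meminfo file, the way the
  # trace does: read every line, strip any "Node N " prefix, then skip fields
  # until the requested key matches and print its value.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node N "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue     # the repeated "continue" entries seen in the trace
          echo "$val"                          # e.g. 0 for HugePages_Surp on node 0
          return 0
      done
      return 1
  }
  # get_meminfo_sketch HugePages_Surp 0   # -> 0, as in the node-0 lookup above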
00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682028 kB' 'MemFree: 46332632 kB' 'MemUsed: 14349396 kB' 'SwapCached: 0 kB' 'Active: 7635072 kB' 'Inactive: 3368612 kB' 'Active(anon): 7317640 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 3368612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10771972 kB' 'Mapped: 138480 kB' 'AnonPages: 231732 kB' 'Shmem: 7085928 kB' 'KernelStack: 12472 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180104 kB' 'Slab: 585424 kB' 'SReclaimable: 180104 kB' 'SUnreclaim: 405320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
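With node 0's HugePages_Surp back as 0, the identical lookup runs for node 1. odd_alloc then only needs the set of per-node counts reported by sysfs to match the set it requested, since the kernel may place the odd 1025th page on either node. A condensed model of that bookkeeping, using the values visible in this run (512/513) and illustrative variable names:

  # Per-node hugepage counts read back from sysfs vs. the split the test expected.
  nodes_sys=(512 513)     # node0, node1 as reported
  nodes_test=(513 512)    # node0, node1 as requested
  declare -a sorted_s sorted_t
  for node in "${!nodes_test[@]}"; do
      surp=0; resv=0                          # both 0 in the trace above
      (( nodes_test[node] += resv + surp ))
      sorted_t[nodes_test[node]]=1            # index by value: indexed-array keys expand sorted
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  # "512 513" == "512 513": the counts match as a set, so the odd allocation is accepted.
  [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc verified"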
00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.979 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.980 11:11:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:09.981 node0=512 expecting 513 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:09.981 node1=513 expecting 512 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:09.981 00:04:09.981 real 0m3.994s 00:04:09.981 user 0m1.565s 00:04:09.981 sys 0m2.499s 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.981 11:11:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.981 ************************************ 00:04:09.981 END TEST odd_alloc 00:04:09.981 ************************************ 00:04:09.981 11:11:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:09.981 11:11:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:09.981 11:11:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.981 11:11:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.981 ************************************ 00:04:09.981 START TEST custom_alloc 00:04:09.981 ************************************ 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:09.981 11:11:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.981 11:11:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.188 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.188 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:14.188 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 104614316 kB' 'MemAvailable: 107873040 kB' 'Buffers: 3736 kB' 'Cached: 14365748 kB' 'SwapCached: 0 kB' 'Active: 11412456 kB' 'Inactive: 3520652 kB' 'Active(anon): 10993512 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566432 kB' 'Mapped: 187740 kB' 'Shmem: 10429888 kB' 'KReclaimable: 286940 kB' 'Slab: 1021104 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734164 kB' 'KernelStack: 24960 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987752 kB' 'Committed_AS: 12513172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230188 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.188 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.189 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
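(Sketch, not captured log output.) The verify_nr_hugepages step traced above scans /proc/meminfo one "key: value" pair at a time until it reaches the requested counter; here AnonHugePages is 0 kB, which is where anon=0 comes from. A rough stand-alone equivalent of that lookup, assuming the usual /proc/meminfo layout (the real helper is get_meminfo in setup/common.sh), might look like:

    get_meminfo() {   # sketch only: print the value of one /proc/meminfo key, or 0 if it is absent
        local key=$1
        awk -v k="$key" '$1 == k":" {print $2; found=1} END {if (!found) print 0}' /proc/meminfo
    }
    get_meminfo HugePages_Total   # 1536 on this run, matching the dump above
    get_meminfo AnonHugePages     # 0, matching the anon=0 result above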
00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 104614064 kB' 'MemAvailable: 107872788 kB' 'Buffers: 3736 kB' 'Cached: 14365748 kB' 'SwapCached: 0 kB' 'Active: 11412828 kB' 'Inactive: 3520652 kB' 'Active(anon): 10993884 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566812 kB' 'Mapped: 187740 kB' 'Shmem: 10429888 kB' 'KReclaimable: 286940 kB' 'Slab: 1021124 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734184 kB' 'KernelStack: 24960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987752 kB' 'Committed_AS: 12513192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230140 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 
11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.191 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 104614280 kB' 'MemAvailable: 107873004 kB' 'Buffers: 3736 kB' 'Cached: 14365760 kB' 'SwapCached: 0 kB' 'Active: 11410872 kB' 'Inactive: 3520652 kB' 'Active(anon): 10991928 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565308 kB' 'Mapped: 187596 kB' 'Shmem: 
10429900 kB' 'KReclaimable: 286940 kB' 'Slab: 1021116 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734176 kB' 'KernelStack: 24960 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987752 kB' 'Committed_AS: 12513212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230140 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 
11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.192 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
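The xtrace runs above and below are the meminfo lookup helper walking every key of /proc/meminfo (or of a node's own meminfo file) until it reaches the one requested -- HugePages_Rsvd at this point in the test -- and echoing its value. A minimal, self-contained sketch of that pattern, reconstructed from the traced statements (the names get_meminfo and mem_f and the optional node argument follow the trace; this is a sketch, not a verbatim copy of setup/common.sh):

  shopt -s extglob                       # needed for the +([0-9]) pattern seen in the trace

  # Sketch: print the value of one meminfo field, optionally for a single NUMA node.
  get_meminfo() {
          local get=$1                   # field name, e.g. HugePages_Rsvd
          local node=$2                  # optional node number (0, 1, ...)
          local var val _
          local mem_f=/proc/meminfo
          local -a mem

          # Per-node lookups read that node's own meminfo file when it exists.
          if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi

          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix from per-node files

          local line
          for line in "${mem[@]}"; do
                  # Each traced "continue" above is one non-matching key being skipped.
                  IFS=': ' read -r var val _ <<< "$line"
                  [[ $var == "$get" ]] || continue
                  echo "${val:-0}"
                  return 0
          done
          return 1
  }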
00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.193 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:14.194 nr_hugepages=1536 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.194 resv_hugepages=0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.194 surplus_hugepages=0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.194 anon_hugepages=0 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 104613892 kB' 'MemAvailable: 107872616 kB' 'Buffers: 3736 kB' 'Cached: 14365788 kB' 'SwapCached: 0 kB' 'Active: 11411248 kB' 'Inactive: 3520652 kB' 'Active(anon): 10992304 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565664 kB' 'Mapped: 187596 kB' 'Shmem: 10429928 kB' 'KReclaimable: 286940 kB' 'Slab: 1021116 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734176 kB' 'KernelStack: 24960 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987752 kB' 'Committed_AS: 12513232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230140 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.194 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.195 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59332172 kB' 'MemUsed: 6329828 kB' 'SwapCached: 0 kB' 'Active: 3777564 kB' 'Inactive: 152040 kB' 'Active(anon): 3676052 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597560 kB' 'Mapped: 49124 kB' 'AnonPages: 335316 kB' 'Shmem: 3344008 kB' 'KernelStack: 12488 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435548 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.196 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
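Stepping back from the per-key scans, the check the hugepages.sh lines are driving at is this: the system-wide HugePages_Total read back (1536 here) must equal nr_hugepages plus surplus plus reserved pages (both 0 in this run), and the per-node counts -- 512 on node0 and 1024 on node1 per the get_nodes trace -- get the same treatment once per-node surplus and reserved pages are folded in. A rough sketch of that accounting, reusing the get_meminfo sketch above (the node split and values are taken from this run; the script itself collects the per-node counts into sorted_t/sorted_s for comparison, as the trace further on begins to show):

  nr_hugepages=1536
  resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
  surp=$(get_meminfo HugePages_Surp)       # 0 in this run
  total=$(get_meminfo HugePages_Total)     # 1536 in this run

  # System-wide: everything the kernel reports must be accounted for.
  (( total == nr_hugepages + surp + resv )) || echo "system-wide hugepage count mismatch"

  # Per-node: this run asked for 512 pages on node0 and 1024 on node1.
  nodes_test=([0]=512 [1]=1024)
  for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))
          (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
          (( $(get_meminfo HugePages_Total "$node") == nodes_test[node] )) ||
                  echo "node$node hugepage count mismatch"
  done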
00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682028 kB' 'MemFree: 45281468 kB' 'MemUsed: 15400560 kB' 'SwapCached: 0 kB' 'Active: 7634024 kB' 'Inactive: 3368612 kB' 'Active(anon): 7316592 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 3368612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10771984 kB' 'Mapped: 138472 kB' 'AnonPages: 230676 kB' 'Shmem: 7085940 kB' 'KernelStack: 12472 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180104 kB' 'Slab: 585568 kB' 'SReclaimable: 180104 kB' 'SUnreclaim: 405464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.197 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
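For orientation while reading the node-1 scan in progress here: the values it will land on are already visible in the node1 meminfo dump printed just above, so with the get_meminfo sketch from earlier the lookups for this node reduce to:

  # Values as reported by /sys/devices/system/node/node1/meminfo in this run:
  get_meminfo HugePages_Total 1    # prints 1024
  get_meminfo HugePages_Free 1     # prints 1024
  get_meminfo HugePages_Surp 1     # prints 0  (the value this scan is after)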
00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.198 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.199 node0=512 expecting 512 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:14.199 node1=1024 expecting 1024 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:14.199 00:04:14.199 real 0m4.077s 00:04:14.199 user 0m1.593s 00:04:14.199 sys 0m2.557s 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.199 11:11:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.199 ************************************ 00:04:14.199 END TEST custom_alloc 00:04:14.199 ************************************ 00:04:14.199 11:11:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.199 11:11:10 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.199 11:11:10 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.199 11:11:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.199 ************************************ 00:04:14.199 START TEST no_shrink_alloc 00:04:14.199 ************************************ 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- 
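For context on the no_shrink_alloc setup that starts above: get_test_nr_hugepages is called with a size of 2097152 and node list ('0'), and the trace records nr_hugepages=1024 before assigning that count to node 0 just below. With the 2048 kB hugepage size reported later in this log, the numbers are consistent with the size argument being interpreted in kB: 2097152 kB / 2048 kB per page = 1024 pages. A minimal sketch of that arithmetic, assuming the kB interpretation (the helper name below is invented for illustration and is not part of setup/hugepages.sh):

# Hedged sketch of the size -> page-count arithmetic implied by the trace
# (request_to_pages is an illustrative name; the real logic lives in
# setup/hugepages.sh and is only partially visible in this log).
request_to_pages() {
  local size_kb=$1 hugepagesize_kb=$2
  echo $((size_kb / hugepagesize_kb))
}
request_to_pages 2097152 2048   # -> 1024, matching nr_hugepages=1024 in the trace
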
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.199 11:11:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.407 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.407 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105567824 kB' 'MemAvailable: 108826548 kB' 'Buffers: 3736 kB' 'Cached: 14365936 kB' 'SwapCached: 0 kB' 'Active: 11413012 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994068 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566804 kB' 'Mapped: 187640 kB' 'Shmem: 10430076 kB' 'KReclaimable: 286940 kB' 'Slab: 1021072 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734132 kB' 'KernelStack: 24992 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514292 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105570776 kB' 'MemAvailable: 108829500 kB' 'Buffers: 3736 kB' 'Cached: 14365936 kB' 'SwapCached: 0 kB' 'Active: 11413768 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994824 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567592 kB' 'Mapped: 187720 kB' 'Shmem: 10430076 kB' 'KReclaimable: 286940 kB' 'Slab: 1021128 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734188 kB' 'KernelStack: 25008 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
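The AnonHugePages lookup that finishes just above (echo 0 / return 0, giving anon=0) follows the pattern repeated throughout this trace: read /proc/meminfo (or a node's own meminfo file when a node is given), strip any "Node N " prefix, then scan "key: value" pairs until the requested key matches. The following is a minimal, runnable reconstruction of that loop based only on the xtrace shown here; it is a simplified sketch, not the verbatim SPDK setup/common.sh:

# Reconstruction of the meminfo lookup pattern traced above (assumption:
# simplified from the xtrace; not the verbatim SPDK setup/common.sh).
shopt -s extglob                         # needed for the "Node +([0-9]) " strip below

get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo mem line var val _
  # Per-node queries read the node's own meminfo file when it exists.
  [[ -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every line with "Node N "
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

get_meminfo HugePages_Surp               # prints 0 here, matching the "echo 0" above
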
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 
11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.409 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.410 11:11:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.410 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.411 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105570904 kB' 'MemAvailable: 108829628 kB' 'Buffers: 3736 kB' 'Cached: 14365956 kB' 'SwapCached: 0 kB' 'Active: 11413040 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994096 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566796 kB' 'Mapped: 187720 kB' 'Shmem: 10430096 kB' 'KReclaimable: 286940 kB' 'Slab: 1021128 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734188 kB' 'KernelStack: 24944 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
[setup/common.sh@31-32 xtrace repeats here for each field of the snapshot above: every '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' comparison fails and the loop continues until the HugePages_Rsvd line is read]
00:04:18.412 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.412 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.412 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.412 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
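Every get_meminfo call traced in this log follows the same pattern: pick the meminfo source (the system-wide /proc/meminfo, or a node-local copy when a NUMA node is named), strip the "Node N" prefix from per-node lines, then scan "field: value" pairs until the requested field is found and echo its value. The sketch below is reconstructed from the xtrace lines above rather than copied from test/setup/common.sh, so details such as the shopt line and the early-return branch are assumptions:

shopt -s extglob   # needed for the +([0-9]) patterns used below

get_meminfo() {    # usage: get_meminfo <field> [numa-node]
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # with a node argument the lookup switches to that node's own meminfo
        mem_f=/sys/devices/system/node/node$node/meminfo
    elif [[ -n $node ]]; then
        return 1   # a node was named but exposes no meminfo (assumed fallback)
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; drop it

    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric part only; a trailing "kB" lands in the discarded field
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

Called as get_meminfo HugePages_Rsvd it prints 0 here; the later call get_meminfo HugePages_Surp 0 resolves to /sys/devices/system/node/node0/meminfo instead.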
00:04:18.412 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:18.412 nr_hugepages=1024 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.412 resv_hugepages=0 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.412 surplus_hugepages=0 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.413 anon_hugepages=0 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
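The two arithmetic tests above are plain accounting: the hugepage pool the kernel reports must equal what the test configured plus any surplus and reserved pages, and with surp=0 and resv=0 the two forms are equivalent. A hedged sketch of that check, with nr_hugepages assumed to have been set to 1024 earlier in the test and the expansion order simplified relative to setup/hugepages.sh:

# values just read via get_meminfo in the trace: both zero in this run
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
nr_hugepages=1024                     # configured earlier in the test (assumed here)
anon=$(get_meminfo AnonHugePages)     # assumption about where the reported 0 comes from

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# the pool is only healthy if the kernel still accounts for every requested page
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages ))   # equivalent here, since surp == resv == 0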
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.413 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105570904 kB' 'MemAvailable: 108829628 kB' 'Buffers: 3736 kB' 'Cached: 14365956 kB' 'SwapCached: 0 kB' 'Active: 11413700 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994756 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567508 kB' 'Mapped: 187720 kB' 'Shmem: 10430096 kB' 'KReclaimable: 286940 kB' 'Slab: 1021128 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 734188 kB' 'KernelStack: 24960 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12514352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB'
[setup/common.sh@31-32 xtrace repeats here for each field of the snapshot above: every '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' comparison fails and the loop continues until the HugePages_Total line is read]
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
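get_nodes, whose trace ends just above, enumerates the NUMA nodes exposed under /sys/devices/system/node and records a per-node hugepage count: 1024 pages land on node0, none on node1, and no_nodes=2. One plausible shape of the helper is sketched here; reading each node's nr_hugepages counter from sysfs is an assumption, since the helper's source is not visible in this log:

shopt -s extglob
declare -A nodes_sys

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # assumption: the per-node count comes from the node's 2 MB hugepage sysfs counter
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # the test requires at least one NUMA node
}

The nodes_test array iterated next holds the per-node distribution the test expects; resv is added to it before the per-node HugePages_Surp lookup, presumably so reserved pages are not mistaken for a shrunken pool.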
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.414 11:11:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:18.414 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.414 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.414 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:18.414 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 58269560 kB' 'MemUsed: 7392440 kB' 'SwapCached: 0 kB' 'Active: 3778336 kB' 'Inactive: 152040 kB' 'Active(anon): 3676824 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597696 kB' 'Mapped: 49140 kB' 'AnonPages: 335976 kB' 'Shmem: 3344144 kB' 'KernelStack: 12488 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435444 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.415 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:18.416 node0=1024 expecting 1024 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.416 11:11:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.715 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.715 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.715 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.985 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105607228 kB' 'MemAvailable: 108865952 kB' 'Buffers: 3736 kB' 'Cached: 14366092 kB' 'SwapCached: 0 kB' 'Active: 11413524 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994580 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567636 kB' 'Mapped: 187928 kB' 'Shmem: 10430232 kB' 'KReclaimable: 286940 kB' 'Slab: 1020556 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733616 kB' 'KernelStack: 24928 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12516532 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230220 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.986 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105606988 kB' 'MemAvailable: 108865712 kB' 'Buffers: 3736 kB' 'Cached: 14366096 kB' 'SwapCached: 0 kB' 'Active: 11414684 kB' 'Inactive: 3520652 kB' 'Active(anon): 10995740 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568372 kB' 'Mapped: 187744 kB' 'Shmem: 10430236 kB' 'KReclaimable: 286940 kB' 'Slab: 1020620 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733680 kB' 'KernelStack: 24976 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12518160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230204 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.987 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 
11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.988 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105608060 kB' 'MemAvailable: 108866784 kB' 'Buffers: 3736 kB' 'Cached: 14366116 kB' 'SwapCached: 0 kB' 'Active: 11413324 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994380 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567452 kB' 'Mapped: 187636 kB' 'Shmem: 10430256 kB' 'KReclaimable: 286940 kB' 'Slab: 1020616 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733676 kB' 'KernelStack: 24896 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12516572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230172 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.989 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.990 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 
11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.991 nr_hugepages=1024 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.991 resv_hugepages=0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.991 surplus_hugepages=0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.991 anon_hugepages=0 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344028 kB' 'MemFree: 105609868 kB' 'MemAvailable: 108868592 kB' 'Buffers: 3736 kB' 'Cached: 14366136 kB' 'SwapCached: 0 kB' 'Active: 11413860 kB' 'Inactive: 3520652 kB' 'Active(anon): 10994916 kB' 'Inactive(anon): 0 kB' 'Active(file): 418944 kB' 'Inactive(file): 3520652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567904 kB' 'Mapped: 187636 kB' 'Shmem: 10430276 kB' 'KReclaimable: 286940 kB' 'Slab: 1020616 kB' 'SReclaimable: 286940 kB' 'SUnreclaim: 733676 kB' 'KernelStack: 24960 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512040 kB' 'Committed_AS: 12518204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 103424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3287332 kB' 'DirectMap2M: 23656448 kB' 'DirectMap1G: 109051904 kB' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.991 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.992 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65662000 kB' 'MemFree: 58270140 kB' 'MemUsed: 7391860 kB' 'SwapCached: 0 kB' 'Active: 3779812 kB' 'Inactive: 152040 kB' 'Active(anon): 3678300 kB' 'Inactive(anon): 0 kB' 'Active(file): 101512 kB' 'Inactive(file): 152040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3597892 kB' 'Mapped: 49164 kB' 'AnonPages: 337136 kB' 'Shmem: 3344340 kB' 'KernelStack: 12600 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106836 kB' 'Slab: 435200 kB' 'SReclaimable: 106836 kB' 'SUnreclaim: 328364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 
11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.993 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.994 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.995 node0=1024 expecting 1024 00:04:21.995 11:11:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.995 00:04:21.995 real 0m8.096s 00:04:21.995 user 0m3.190s 00:04:21.995 sys 0m5.049s 00:04:21.995 11:11:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:21.995 11:11:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.995 ************************************ 00:04:21.995 END TEST no_shrink_alloc 00:04:21.995 ************************************ 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
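[Editor's note: the trace above is setup/common.sh's get_meminfo walking a meminfo file field by field with IFS=': ', skipping every key that is not the one requested, and echoing the value once it matches (HugePages_Rsvd -> 0, HugePages_Total -> 1024, then HugePages_Surp for node0 -> 0). A minimal sketch of that parsing pattern, under the assumption that stripping the "Node N " prefix with sed is equivalent to the script's own array trimming; get_meminfo_sketch is a hypothetical name, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch only: mirrors the traced loop (IFS=': '; read -r var val _; compare; echo value).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # A per-node query uses the node's own meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # numeric value, without the trailing "kB"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on the box above
    #      get_meminfo_sketch HugePages_Surp 0   -> 0

With those values in hand, hugepages.sh only needs the arithmetic visible in the trace: (( 1024 == nr_hugepages + surp + resv )) globally and "node0=1024 expecting 1024" per node.]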
00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.995 11:11:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.995 00:04:21.995 real 0m30.556s 00:04:21.995 user 0m10.976s 00:04:21.995 sys 0m18.169s 00:04:21.995 11:11:19 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:21.995 11:11:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.995 ************************************ 00:04:21.995 END TEST hugepages 00:04:21.995 ************************************ 00:04:21.995 11:11:19 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:21.995 11:11:19 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.995 11:11:19 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.995 11:11:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.354 ************************************ 00:04:22.354 START TEST driver 00:04:22.354 ************************************ 00:04:22.354 11:11:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:22.354 * Looking for test storage... 
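Before the driver suite output continues, the clear_hp entries above reset every huge page pool on every NUMA node so the next suite starts clean, then export CLEAR_HUGE=yes. A short sketch of the same idea, assuming direct globbing of /sys and root privileges; the traced helper walks a nodes_sys array built earlier in setup/hugepages.sh rather than globbing node directories.

# Reset all huge page pools to 0 on every NUMA node (requires root).
for hp_dir in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp_dir/nr_hugepages"
done
export CLEAR_HUGE=yes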
00:04:22.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:22.354 11:11:19 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:22.354 11:11:19 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.354 11:11:19 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.641 11:11:24 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:27.641 11:11:24 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:27.641 11:11:24 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.641 11:11:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:27.641 ************************************ 00:04:27.641 START TEST guess_driver 00:04:27.641 ************************************ 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:27.641 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:27.641 Looking for driver=vfio-pci 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.641 11:11:24 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.851 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.852 11:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.241 11:11:30 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.537 00:04:38.537 real 0m11.123s 00:04:38.537 user 0m3.087s 00:04:38.537 sys 0m5.415s 00:04:38.537 11:11:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:38.537 11:11:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.537 ************************************ 00:04:38.537 END TEST guess_driver 00:04:38.537 ************************************ 00:04:38.537 00:04:38.537 real 0m16.392s 00:04:38.537 user 0m4.688s 00:04:38.537 sys 0m8.251s 00:04:38.537 11:11:35 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:38.537 
11:11:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.537 ************************************ 00:04:38.537 END TEST driver 00:04:38.537 ************************************ 00:04:38.537 11:11:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.537 11:11:35 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.537 11:11:35 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.537 11:11:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.537 ************************************ 00:04:38.537 START TEST devices 00:04:38.537 ************************************ 00:04:38.537 11:11:35 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.537 * Looking for test storage... 00:04:38.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.537 11:11:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.537 11:11:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.537 11:11:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.537 11:11:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:43.825 11:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:43.825 No valid GPT data, 
bailing 00:04:43.825 11:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.825 11:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:43.825 11:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.825 11:11:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.825 ************************************ 00:04:43.825 START TEST nvme_mount 00:04:43.825 ************************************ 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:43.825 11:11:40 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.825 11:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:44.085 Creating new GPT entries in memory. 00:04:44.085 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.085 other utilities. 00:04:44.085 11:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.085 11:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.085 11:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.085 11:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.085 11:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.028 Creating new GPT entries in memory. 00:04:45.028 The operation has completed successfully. 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1318114 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.028 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.288 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
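The entries above show setup/common.sh turning the freshly created partition into an ext4 filesystem and mounting it under the test directory. A condensed sketch of that mkfs/mount step, assuming the device node already exists and the variable names are illustrative; the traced helper also accepts an optional size argument (used later with 1024M) that is omitted here.

# Format a test device and mount it, as traced above for /dev/nvme0n1p1.
dev=/dev/nvme0n1p1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

mkdir -p "$mnt"
[[ -e $dev ]] || exit 1
mkfs.ext4 -qF "$dev"      # quiet, force: the device was just partitioned
mount "$dev" "$mnt"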
00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.289 11:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.493 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.494 11:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.494 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.494 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.494 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.494 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.494 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:49.494 11:11:46 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.494 11:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.696 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.697 11:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
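The repeated [[ 0000:80:01.x == ... ]] checks above and below are the verify step: setup.sh config is re-run with PCI_ALLOWED limited to the test NVMe, each output line is split as "pci _ _ status", and the test device's line must report the expected active mount (which keeps the kernel driver bound). A rough sketch of that matching loop; the helper name, the shortened setup.sh path, and the exact status wording are assumptions reconstructed from the trace.

# Scan `setup.sh config` output for the allowed BDF and confirm the expected
# mount is listed among the active devices for that controller.
verify_active() {
    local want_bdf=$1 want_mount=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want_bdf" ]] || continue
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done < <(PCI_ALLOWED=$want_bdf ./scripts/setup.sh config)
    (( found == 1 ))
}

# e.g. verify_active 0000:65:00.0 nvme0n1:nvme0n1p1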
00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.995 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.995 00:04:56.995 real 0m14.034s 00:04:56.995 user 0m4.279s 00:04:56.995 sys 0m7.637s 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:56.995 11:11:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.995 ************************************ 00:04:56.995 END TEST nvme_mount 00:04:56.995 ************************************ 00:04:56.995 
11:11:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.995 11:11:54 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.995 11:11:54 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.995 11:11:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.255 ************************************ 00:04:57.255 START TEST dm_mount 00:04:57.255 ************************************ 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.255 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.256 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.256 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.256 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.256 11:11:54 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:58.196 Creating new GPT entries in memory. 00:04:58.196 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.196 other utilities. 00:04:58.196 11:11:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.196 11:11:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.196 11:11:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.196 11:11:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.196 11:11:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:59.137 Creating new GPT entries in memory. 00:04:59.137 The operation has completed successfully. 
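The dm_mount suite needs two identical 1 GiB partitions on the test disk; the sgdisk call traced here, together with the second one that follows, lays them out back to back starting at sector 2048 (1073741824 bytes = 2097152 512-byte sectors, hence 2048-2099199 and 2099200-4196351). A sketch of the same layout, assuming /dev/nvme0n1 is a disposable test disk.

# Wipe the test disk and create two 1 GiB partitions back to back.
disk=/dev/nvme0n1
size_sectors=$(( 1073741824 / 512 ))

sgdisk "$disk" --zap-all
start=2048
for part in 1 2; do
    end=$(( start + size_sectors - 1 ))
    # flock serializes concurrent sgdisk calls on the same disk, as in the trace
    flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}
    start=$(( end + 1 ))
done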
00:04:59.137 11:11:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.137 11:11:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.137 11:11:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.137 11:11:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.137 11:11:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:00.077 The operation has completed successfully. 00:05:00.077 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.077 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.077 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1323179 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.338 11:11:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:03.696 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:03.697 
11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.697 11:12:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.996 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.997 11:12:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:06.997 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:06.997 00:05:06.997 real 0m9.891s 00:05:06.997 user 0m2.138s 00:05:06.997 sys 0m4.604s 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.997 11:12:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:06.997 ************************************ 00:05:06.997 END TEST dm_mount 00:05:06.997 ************************************ 00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:06.997 11:12:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:07.257 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:07.257 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:05:07.257 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:07.257 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:07.257 11:12:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:07.257
00:05:07.257 real 0m28.788s
00:05:07.257 user 0m8.164s
00:05:07.257 sys 0m15.243s
00:05:07.257 11:12:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:07.257 11:12:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:07.257 ************************************
00:05:07.257 END TEST devices
00:05:07.257 ************************************
00:05:07.518
00:05:07.518 real 1m43.971s
00:05:07.518 user 0m33.205s
00:05:07.518 sys 0m58.186s
00:05:07.518 11:12:04 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:07.518 11:12:04 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:07.518 ************************************
00:05:07.518 END TEST setup.sh
00:05:07.518 ************************************
00:05:07.518 11:12:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:11.727 Hugepages
00:05:11.727 node hugesize free / total
00:05:11.727 node0 1048576kB 0 / 0
00:05:11.727 node0 2048kB 2048 / 2048
00:05:11.727 node1 1048576kB 0 / 0
00:05:11.727 node1 2048kB 0 / 0
00:05:11.727
00:05:11.727 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:11.727 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:11.727 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:11.727 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:11.727 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:11.727 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:11.727 11:12:08 -- spdk/autotest.sh@130 -- # uname -s
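The teardown traced above unmounts the dm_mount test directory, removes the nvme_dm_test device-mapper target, and wipes the partition and whole-disk signatures before scripts/setup.sh status prints the hugepage and PCI tables (the per-node hugepage counts it shows correspond to what the kernel exposes under /sys/devices/system/node/node*/hugepages/). A minimal standalone sketch of that cleanup sequence, assuming the same mount point and device names this run used, would be:

    #!/usr/bin/env bash
    # Hedged re-creation of the cleanup steps seen in the trace above; paths and names are the ones this run used.
    mount_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
    mountpoint -q "$mount_dir" && umount "$mount_dir"                          # drop the test mount if still present
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test   # tear down the dm target
    for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
        [[ -b $dev ]] && wipefs --all "$dev"                                   # clear fs, GPT and PMBR signatures
    done
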
00:05:11.727 11:12:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:11.727 11:12:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:11.727 11:12:08 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.029 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.029 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.940 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:16.940 11:12:14 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:18.324 11:12:15 -- common/autotest_common.sh@1532 -- # bdfs=() 00:05:18.324 11:12:15 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:18.324 11:12:15 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.324 11:12:15 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:18.324 11:12:15 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:18.324 11:12:15 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:18.324 11:12:15 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.324 11:12:15 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.324 11:12:15 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:18.324 11:12:15 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:18.324 11:12:15 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:18.324 11:12:15 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.656 Waiting for block devices as requested 00:05:21.916 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:21.916 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:21.916 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:22.177 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:22.178 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:22.178 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:22.439 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:22.439 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:22.439 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:05:22.700 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:22.700 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:22.700 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:22.961 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:22.961 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:22.961 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:22.961 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:23.222 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:23.222 11:12:20 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:05:23.222 11:12:20 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:23.222 11:12:20 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:05:23.222 11:12:20 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:05:23.223 11:12:20 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:23.223 11:12:20 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:23.223 11:12:20 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:23.223 11:12:20 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:23.223 11:12:20 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:05:23.223 11:12:20 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:23.223 11:12:20 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:23.223 11:12:20 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:23.223 11:12:20 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:23.223 11:12:20 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:23.223 11:12:20 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:23.223 11:12:20 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:23.223 11:12:20 -- common/autotest_common.sh@1556 -- # continue 00:05:23.223 11:12:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:23.223 11:12:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:23.223 11:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.223 11:12:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:23.223 11:12:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:23.223 11:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.223 11:12:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.429 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:27.430 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:28.813 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.813 11:12:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.813 11:12:26 -- common/autotest_common.sh@729 -- # xtrace_disable 
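The pre_cleanup pass above resolves 0000:65:00.0 to /dev/nvme0 through sysfs and uses nvme-cli to read the controller's OACS word (0xe here, so namespace management is supported) and its unallocated capacity (0 here, so it simply continues), before the afterboot step rebinds the devices to vfio-pci. A rough standalone equivalent of that check, assuming nvme-cli is installed and the controller is /dev/nvme0 as in this run:

    #!/usr/bin/env bash
    ctrlr=/dev/nvme0                                               # resolved from 0000:65:00.0 in the trace above
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # " 0xe" in this log
    oacs_ns_manage=$(( oacs & 0x8 ))                               # bit 3: Namespace Management supported
    if (( oacs_ns_manage != 0 )); then
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "no unallocated capacity on $ctrlr, nothing to revert"
    fi
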
00:05:28.813 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.075 11:12:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:29.075 11:12:26 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:29.075 11:12:26 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.075 11:12:26 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:29.075 11:12:26 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:29.075 11:12:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:29.075 11:12:26 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:29.075 11:12:26 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:29.075 11:12:26 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.075 11:12:26 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.075 11:12:26 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:29.075 11:12:26 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:29.075 11:12:26 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:29.075 11:12:26 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:29.075 11:12:26 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:29.075 11:12:26 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:05:29.075 11:12:26 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.075 11:12:26 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:05:29.075 11:12:26 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:65:00.0 00:05:29.075 11:12:26 -- common/autotest_common.sh@1591 -- # [[ -z 0000:65:00.0 ]] 00:05:29.075 11:12:26 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1334651 00:05:29.075 11:12:26 -- common/autotest_common.sh@1597 -- # waitforlisten 1334651 00:05:29.075 11:12:26 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.075 11:12:26 -- common/autotest_common.sh@830 -- # '[' -z 1334651 ']' 00:05:29.075 11:12:26 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.075 11:12:26 -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:29.075 11:12:26 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.075 11:12:26 -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:29.075 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.075 [2024-06-10 11:12:26.230052] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
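opal_revert_cleanup above builds its controller list by asking scripts/gen_nvme.sh for the local NVMe transport addresses and keeping only BDFs whose PCI device ID, read from sysfs, is 0x0a54, then starts spdk_tgt and waits for its RPC socket. A hedged sketch of just the enumeration step, assuming it is run from the SPDK repository root:

    #!/usr/bin/env bash
    rootdir=$PWD                                                   # assumed to be the SPDK checkout, as in the test
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")           # PCI device ID, e.g. 0x0a54 for this drive
        [[ $device == 0x0a54 ]] && echo "$bdf"                     # BDFs that get the opal revert treatment
    done
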
00:05:29.075 [2024-06-10 11:12:26.230116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334651 ] 00:05:29.075 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.336 [2024-06-10 11:12:26.315865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.336 [2024-06-10 11:12:26.408812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.907 11:12:27 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.907 11:12:27 -- common/autotest_common.sh@863 -- # return 0 00:05:29.907 11:12:27 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:05:29.907 11:12:27 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:05:29.907 11:12:27 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0 00:05:33.205 nvme0n1 00:05:33.205 11:12:30 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.205 [2024-06-10 11:12:30.305683] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:33.205 request: 00:05:33.205 { 00:05:33.205 "nvme_ctrlr_name": "nvme0", 00:05:33.205 "password": "test", 00:05:33.205 "method": "bdev_nvme_opal_revert", 00:05:33.205 "req_id": 1 00:05:33.205 } 00:05:33.205 Got JSON-RPC error response 00:05:33.205 response: 00:05:33.205 { 00:05:33.205 "code": -32602, 00:05:33.205 "message": "Invalid parameters" 00:05:33.205 } 00:05:33.205 11:12:30 -- common/autotest_common.sh@1603 -- # true 00:05:33.205 11:12:30 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:05:33.205 11:12:30 -- common/autotest_common.sh@1607 -- # killprocess 1334651 00:05:33.205 11:12:30 -- common/autotest_common.sh@949 -- # '[' -z 1334651 ']' 00:05:33.205 11:12:30 -- common/autotest_common.sh@953 -- # kill -0 1334651 00:05:33.205 11:12:30 -- common/autotest_common.sh@954 -- # uname 00:05:33.205 11:12:30 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:33.205 11:12:30 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1334651 00:05:33.205 11:12:30 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:33.205 11:12:30 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:33.205 11:12:30 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1334651' 00:05:33.205 killing process with pid 1334651 00:05:33.205 11:12:30 -- common/autotest_common.sh@968 -- # kill 1334651 00:05:33.205 11:12:30 -- common/autotest_common.sh@973 -- # wait 1334651 00:05:35.742 11:12:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:35.742 11:12:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:35.742 11:12:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.742 11:12:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.742 11:12:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:35.742 11:12:32 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:35.742 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.742 11:12:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:35.742 11:12:32 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.742 11:12:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:35.742 11:12:32 
-- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:35.742 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.742 ************************************ 00:05:35.742 START TEST env 00:05:35.742 ************************************ 00:05:35.742 11:12:32 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.003 * Looking for test storage... 00:05:36.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:36.003 11:12:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.003 11:12:33 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.003 11:12:33 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.003 11:12:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.003 ************************************ 00:05:36.003 START TEST env_memory 00:05:36.003 ************************************ 00:05:36.003 11:12:33 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.003 00:05:36.003 00:05:36.003 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.003 http://cunit.sourceforge.net/ 00:05:36.003 00:05:36.003 00:05:36.003 Suite: memory 00:05:36.003 Test: alloc and free memory map ...[2024-06-10 11:12:33.106798] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.003 passed 00:05:36.003 Test: mem map translation ...[2024-06-10 11:12:33.130266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.003 [2024-06-10 11:12:33.130290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.003 [2024-06-10 11:12:33.130333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.003 [2024-06-10 11:12:33.130340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.003 passed 00:05:36.003 Test: mem map registration ...[2024-06-10 11:12:33.181272] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:36.003 [2024-06-10 11:12:33.181295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:36.003 passed 00:05:36.270 Test: mem map adjacent registrations ...passed 00:05:36.270 00:05:36.270 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.270 suites 1 1 n/a 0 0 00:05:36.270 tests 4 4 4 0 0 00:05:36.270 asserts 152 152 152 0 n/a 00:05:36.270 00:05:36.270 Elapsed time = 0.181 seconds 00:05:36.270 00:05:36.270 real 0m0.194s 00:05:36.270 user 0m0.184s 00:05:36.270 sys 0m0.009s 00:05:36.270 11:12:33 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.270 11:12:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.270 
************************************ 00:05:36.270 END TEST env_memory 00:05:36.270 ************************************ 00:05:36.270 11:12:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.270 11:12:33 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.270 11:12:33 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.270 11:12:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.270 ************************************ 00:05:36.270 START TEST env_vtophys 00:05:36.270 ************************************ 00:05:36.270 11:12:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.270 EAL: lib.eal log level changed from notice to debug 00:05:36.270 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.270 EAL: Detected lcore 1 as core 1 on socket 0 00:05:36.270 EAL: Detected lcore 2 as core 2 on socket 0 00:05:36.270 EAL: Detected lcore 3 as core 3 on socket 0 00:05:36.270 EAL: Detected lcore 4 as core 4 on socket 0 00:05:36.270 EAL: Detected lcore 5 as core 5 on socket 0 00:05:36.270 EAL: Detected lcore 6 as core 6 on socket 0 00:05:36.270 EAL: Detected lcore 7 as core 7 on socket 0 00:05:36.270 EAL: Detected lcore 8 as core 8 on socket 0 00:05:36.270 EAL: Detected lcore 9 as core 9 on socket 0 00:05:36.270 EAL: Detected lcore 10 as core 10 on socket 0 00:05:36.270 EAL: Detected lcore 11 as core 11 on socket 0 00:05:36.270 EAL: Detected lcore 12 as core 12 on socket 0 00:05:36.270 EAL: Detected lcore 13 as core 13 on socket 0 00:05:36.270 EAL: Detected lcore 14 as core 14 on socket 0 00:05:36.270 EAL: Detected lcore 15 as core 15 on socket 0 00:05:36.271 EAL: Detected lcore 16 as core 16 on socket 0 00:05:36.271 EAL: Detected lcore 17 as core 17 on socket 0 00:05:36.271 EAL: Detected lcore 18 as core 18 on socket 0 00:05:36.271 EAL: Detected lcore 19 as core 19 on socket 0 00:05:36.271 EAL: Detected lcore 20 as core 20 on socket 0 00:05:36.271 EAL: Detected lcore 21 as core 21 on socket 0 00:05:36.271 EAL: Detected lcore 22 as core 22 on socket 0 00:05:36.271 EAL: Detected lcore 23 as core 23 on socket 0 00:05:36.271 EAL: Detected lcore 24 as core 24 on socket 0 00:05:36.271 EAL: Detected lcore 25 as core 25 on socket 0 00:05:36.271 EAL: Detected lcore 26 as core 26 on socket 0 00:05:36.271 EAL: Detected lcore 27 as core 27 on socket 0 00:05:36.271 EAL: Detected lcore 28 as core 28 on socket 0 00:05:36.271 EAL: Detected lcore 29 as core 29 on socket 0 00:05:36.271 EAL: Detected lcore 30 as core 30 on socket 0 00:05:36.271 EAL: Detected lcore 31 as core 31 on socket 0 00:05:36.271 EAL: Detected lcore 32 as core 0 on socket 1 00:05:36.271 EAL: Detected lcore 33 as core 1 on socket 1 00:05:36.271 EAL: Detected lcore 34 as core 2 on socket 1 00:05:36.271 EAL: Detected lcore 35 as core 3 on socket 1 00:05:36.271 EAL: Detected lcore 36 as core 4 on socket 1 00:05:36.271 EAL: Detected lcore 37 as core 5 on socket 1 00:05:36.271 EAL: Detected lcore 38 as core 6 on socket 1 00:05:36.271 EAL: Detected lcore 39 as core 7 on socket 1 00:05:36.271 EAL: Detected lcore 40 as core 8 on socket 1 00:05:36.271 EAL: Detected lcore 41 as core 9 on socket 1 00:05:36.271 EAL: Detected lcore 42 as core 10 on socket 1 00:05:36.271 EAL: Detected lcore 43 as core 11 on socket 1 00:05:36.271 EAL: Detected lcore 44 as core 12 on socket 1 00:05:36.271 EAL: Detected lcore 45 as core 13 on socket 1 00:05:36.271 EAL: 
Detected lcore 46 as core 14 on socket 1 00:05:36.271 EAL: Detected lcore 47 as core 15 on socket 1 00:05:36.271 EAL: Detected lcore 48 as core 16 on socket 1 00:05:36.271 EAL: Detected lcore 49 as core 17 on socket 1 00:05:36.271 EAL: Detected lcore 50 as core 18 on socket 1 00:05:36.271 EAL: Detected lcore 51 as core 19 on socket 1 00:05:36.271 EAL: Detected lcore 52 as core 20 on socket 1 00:05:36.271 EAL: Detected lcore 53 as core 21 on socket 1 00:05:36.271 EAL: Detected lcore 54 as core 22 on socket 1 00:05:36.271 EAL: Detected lcore 55 as core 23 on socket 1 00:05:36.271 EAL: Detected lcore 56 as core 24 on socket 1 00:05:36.271 EAL: Detected lcore 57 as core 25 on socket 1 00:05:36.271 EAL: Detected lcore 58 as core 26 on socket 1 00:05:36.271 EAL: Detected lcore 59 as core 27 on socket 1 00:05:36.271 EAL: Detected lcore 60 as core 28 on socket 1 00:05:36.271 EAL: Detected lcore 61 as core 29 on socket 1 00:05:36.271 EAL: Detected lcore 62 as core 30 on socket 1 00:05:36.271 EAL: Detected lcore 63 as core 31 on socket 1 00:05:36.271 EAL: Detected lcore 64 as core 0 on socket 0 00:05:36.271 EAL: Detected lcore 65 as core 1 on socket 0 00:05:36.271 EAL: Detected lcore 66 as core 2 on socket 0 00:05:36.271 EAL: Detected lcore 67 as core 3 on socket 0 00:05:36.271 EAL: Detected lcore 68 as core 4 on socket 0 00:05:36.271 EAL: Detected lcore 69 as core 5 on socket 0 00:05:36.271 EAL: Detected lcore 70 as core 6 on socket 0 00:05:36.271 EAL: Detected lcore 71 as core 7 on socket 0 00:05:36.271 EAL: Detected lcore 72 as core 8 on socket 0 00:05:36.271 EAL: Detected lcore 73 as core 9 on socket 0 00:05:36.271 EAL: Detected lcore 74 as core 10 on socket 0 00:05:36.271 EAL: Detected lcore 75 as core 11 on socket 0 00:05:36.271 EAL: Detected lcore 76 as core 12 on socket 0 00:05:36.271 EAL: Detected lcore 77 as core 13 on socket 0 00:05:36.271 EAL: Detected lcore 78 as core 14 on socket 0 00:05:36.271 EAL: Detected lcore 79 as core 15 on socket 0 00:05:36.271 EAL: Detected lcore 80 as core 16 on socket 0 00:05:36.271 EAL: Detected lcore 81 as core 17 on socket 0 00:05:36.271 EAL: Detected lcore 82 as core 18 on socket 0 00:05:36.271 EAL: Detected lcore 83 as core 19 on socket 0 00:05:36.271 EAL: Detected lcore 84 as core 20 on socket 0 00:05:36.271 EAL: Detected lcore 85 as core 21 on socket 0 00:05:36.271 EAL: Detected lcore 86 as core 22 on socket 0 00:05:36.271 EAL: Detected lcore 87 as core 23 on socket 0 00:05:36.271 EAL: Detected lcore 88 as core 24 on socket 0 00:05:36.271 EAL: Detected lcore 89 as core 25 on socket 0 00:05:36.271 EAL: Detected lcore 90 as core 26 on socket 0 00:05:36.271 EAL: Detected lcore 91 as core 27 on socket 0 00:05:36.271 EAL: Detected lcore 92 as core 28 on socket 0 00:05:36.271 EAL: Detected lcore 93 as core 29 on socket 0 00:05:36.271 EAL: Detected lcore 94 as core 30 on socket 0 00:05:36.271 EAL: Detected lcore 95 as core 31 on socket 0 00:05:36.271 EAL: Detected lcore 96 as core 0 on socket 1 00:05:36.271 EAL: Detected lcore 97 as core 1 on socket 1 00:05:36.271 EAL: Detected lcore 98 as core 2 on socket 1 00:05:36.271 EAL: Detected lcore 99 as core 3 on socket 1 00:05:36.271 EAL: Detected lcore 100 as core 4 on socket 1 00:05:36.271 EAL: Detected lcore 101 as core 5 on socket 1 00:05:36.271 EAL: Detected lcore 102 as core 6 on socket 1 00:05:36.271 EAL: Detected lcore 103 as core 7 on socket 1 00:05:36.271 EAL: Detected lcore 104 as core 8 on socket 1 00:05:36.271 EAL: Detected lcore 105 as core 9 on socket 1 00:05:36.271 EAL: Detected lcore 106 as core 
10 on socket 1 00:05:36.271 EAL: Detected lcore 107 as core 11 on socket 1 00:05:36.271 EAL: Detected lcore 108 as core 12 on socket 1 00:05:36.271 EAL: Detected lcore 109 as core 13 on socket 1 00:05:36.271 EAL: Detected lcore 110 as core 14 on socket 1 00:05:36.271 EAL: Detected lcore 111 as core 15 on socket 1 00:05:36.271 EAL: Detected lcore 112 as core 16 on socket 1 00:05:36.271 EAL: Detected lcore 113 as core 17 on socket 1 00:05:36.271 EAL: Detected lcore 114 as core 18 on socket 1 00:05:36.271 EAL: Detected lcore 115 as core 19 on socket 1 00:05:36.271 EAL: Detected lcore 116 as core 20 on socket 1 00:05:36.271 EAL: Detected lcore 117 as core 21 on socket 1 00:05:36.271 EAL: Detected lcore 118 as core 22 on socket 1 00:05:36.271 EAL: Detected lcore 119 as core 23 on socket 1 00:05:36.271 EAL: Detected lcore 120 as core 24 on socket 1 00:05:36.271 EAL: Detected lcore 121 as core 25 on socket 1 00:05:36.271 EAL: Detected lcore 122 as core 26 on socket 1 00:05:36.271 EAL: Detected lcore 123 as core 27 on socket 1 00:05:36.271 EAL: Detected lcore 124 as core 28 on socket 1 00:05:36.271 EAL: Detected lcore 125 as core 29 on socket 1 00:05:36.271 EAL: Detected lcore 126 as core 30 on socket 1 00:05:36.271 EAL: Detected lcore 127 as core 31 on socket 1 00:05:36.271 EAL: Maximum logical cores by configuration: 128 00:05:36.271 EAL: Detected CPU lcores: 128 00:05:36.271 EAL: Detected NUMA nodes: 2 00:05:36.271 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:36.271 EAL: Detected shared linkage of DPDK 00:05:36.271 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.271 EAL: Bus pci wants IOVA as 'DC' 00:05:36.271 EAL: Buses did not request a specific IOVA mode. 00:05:36.271 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:36.271 EAL: Selected IOVA mode 'VA' 00:05:36.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.271 EAL: Probing VFIO support... 00:05:36.271 EAL: IOMMU type 1 (Type 1) is supported 00:05:36.271 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:36.271 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:36.271 EAL: VFIO support initialized 00:05:36.271 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.271 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.271 EAL: Setting up physically contiguous memory... 
00:05:36.271 EAL: Setting maximum number of open files to 524288 00:05:36.271 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.271 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:36.271 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.271 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.271 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.271 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.271 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.271 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.271 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.271 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.272 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.272 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.272 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:36.272 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.272 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:36.272 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.272 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:36.272 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.272 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:36.272 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.272 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:36.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.272 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.272 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:36.272 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:36.272 EAL: Hugepages will be freed exactly as allocated. 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: TSC frequency is ~2600000 KHz 00:05:36.272 EAL: Main lcore 0 is ready (tid=7f44f7eb2a00;cpuset=[0]) 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 0 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.272 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.272 00:05:36.272 00:05:36.272 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.272 http://cunit.sourceforge.net/ 00:05:36.272 00:05:36.272 00:05:36.272 Suite: components_suite 00:05:36.272 Test: vtophys_malloc_test ...passed 00:05:36.272 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.272 EAL: Trying to obtain current memory policy. 
00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.272 EAL: Restoring previous memory policy: 4 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.272 EAL: request: mp_malloc_sync 00:05:36.272 EAL: No shared files mode enabled, IPC is disabled 00:05:36.272 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.272 EAL: Trying to obtain current memory policy. 00:05:36.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.593 EAL: Restoring previous memory policy: 4 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.593 EAL: Trying to obtain current memory policy. 00:05:36.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.593 EAL: Restoring previous memory policy: 4 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.593 EAL: Trying to obtain current memory policy. 
00:05:36.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.593 EAL: Restoring previous memory policy: 4 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was expanded by 514MB 00:05:36.593 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.593 EAL: request: mp_malloc_sync 00:05:36.593 EAL: No shared files mode enabled, IPC is disabled 00:05:36.593 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.593 EAL: Trying to obtain current memory policy. 00:05:36.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.857 EAL: Restoring previous memory policy: 4 00:05:36.857 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.857 EAL: request: mp_malloc_sync 00:05:36.857 EAL: No shared files mode enabled, IPC is disabled 00:05:36.857 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.857 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.119 EAL: request: mp_malloc_sync 00:05:37.119 EAL: No shared files mode enabled, IPC is disabled 00:05:37.119 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:37.119 passed 00:05:37.119 00:05:37.119 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.119 suites 1 1 n/a 0 0 00:05:37.119 tests 2 2 2 0 0 00:05:37.119 asserts 497 497 497 0 n/a 00:05:37.119 00:05:37.119 Elapsed time = 0.647 seconds 00:05:37.119 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.119 EAL: request: mp_malloc_sync 00:05:37.119 EAL: No shared files mode enabled, IPC is disabled 00:05:37.119 EAL: Heap on socket 0 was shrunk by 2MB 00:05:37.119 EAL: No shared files mode enabled, IPC is disabled 00:05:37.119 EAL: No shared files mode enabled, IPC is disabled 00:05:37.119 EAL: No shared files mode enabled, IPC is disabled 00:05:37.119 00:05:37.119 real 0m0.810s 00:05:37.119 user 0m0.411s 00:05:37.119 sys 0m0.354s 00:05:37.119 11:12:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:37.119 11:12:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:37.119 ************************************ 00:05:37.119 END TEST env_vtophys 00:05:37.119 ************************************ 00:05:37.119 11:12:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:37.119 11:12:34 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:37.119 11:12:34 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:37.119 11:12:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.119 ************************************ 00:05:37.119 START TEST env_pci 00:05:37.119 ************************************ 00:05:37.119 11:12:34 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:37.119 00:05:37.119 00:05:37.119 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.119 http://cunit.sourceforge.net/ 00:05:37.119 00:05:37.119 00:05:37.119 Suite: pci 00:05:37.119 Test: pci_hook ...[2024-06-10 11:12:34.215683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1336230 has claimed it 00:05:37.119 EAL: Cannot find device (10000:00:01.0) 00:05:37.119 EAL: Failed to attach device on primary process 00:05:37.119 passed 00:05:37.119 00:05:37.119 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:37.119 suites 1 1 n/a 0 0 00:05:37.119 tests 1 1 1 0 0 00:05:37.119 asserts 25 25 25 0 n/a 00:05:37.119 00:05:37.119 Elapsed time = 0.035 seconds 00:05:37.119 00:05:37.119 real 0m0.055s 00:05:37.119 user 0m0.018s 00:05:37.119 sys 0m0.037s 00:05:37.119 11:12:34 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:37.119 11:12:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:37.119 ************************************ 00:05:37.119 END TEST env_pci 00:05:37.119 ************************************ 00:05:37.119 11:12:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:37.119 11:12:34 env -- env/env.sh@15 -- # uname 00:05:37.119 11:12:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:37.119 11:12:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:37.119 11:12:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.119 11:12:34 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:37.119 11:12:34 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:37.119 11:12:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.119 ************************************ 00:05:37.119 START TEST env_dpdk_post_init 00:05:37.119 ************************************ 00:05:37.119 11:12:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.379 EAL: Detected CPU lcores: 128 00:05:37.379 EAL: Detected NUMA nodes: 2 00:05:37.379 EAL: Detected shared linkage of DPDK 00:05:37.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.379 EAL: Selected IOVA mode 'VA' 00:05:37.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.379 EAL: VFIO support initialized 00:05:37.379 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.379 EAL: Using IOMMU type 1 (Type 1) 00:05:37.379 EAL: Ignore mapping IO port bar(1) 00:05:37.640 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:37.640 EAL: Ignore mapping IO port bar(1) 00:05:37.900 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:37.900 EAL: Ignore mapping IO port bar(1) 00:05:38.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:38.161 EAL: Ignore mapping IO port bar(1) 00:05:38.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:38.422 EAL: Ignore mapping IO port bar(1) 00:05:38.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:38.683 EAL: Ignore mapping IO port bar(1) 00:05:38.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:38.943 EAL: Ignore mapping IO port bar(1) 00:05:38.943 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:38.943 EAL: Ignore mapping IO port bar(1) 00:05:39.204 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:39.776 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:65:00.0 (socket 0) 00:05:40.038 EAL: Ignore mapping IO port bar(1) 00:05:40.038 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:40.299 EAL: Ignore mapping IO port bar(1) 00:05:40.299 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:40.560 EAL: Ignore mapping IO port bar(1) 00:05:40.560 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:40.821 EAL: Ignore mapping IO port bar(1) 00:05:40.821 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:40.821 EAL: Ignore mapping IO port bar(1) 00:05:41.082 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:41.082 EAL: Ignore mapping IO port bar(1) 00:05:41.343 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:41.343 EAL: Ignore mapping IO port bar(1) 00:05:41.343 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:41.603 EAL: Ignore mapping IO port bar(1) 00:05:41.603 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:45.821 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:45.821 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:45.821 Starting DPDK initialization... 00:05:45.821 Starting SPDK post initialization... 00:05:45.821 SPDK NVMe probe 00:05:45.821 Attaching to 0000:65:00.0 00:05:45.821 Attached to 0000:65:00.0 00:05:45.822 Cleaning up... 00:05:47.736 00:05:47.736 real 0m10.264s 00:05:47.736 user 0m4.118s 00:05:47.736 sys 0m0.173s 00:05:47.736 11:12:44 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.736 11:12:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 ************************************ 00:05:47.736 END TEST env_dpdk_post_init 00:05:47.736 ************************************ 00:05:47.736 11:12:44 env -- env/env.sh@26 -- # uname 00:05:47.736 11:12:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.736 11:12:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.736 11:12:44 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.736 11:12:44 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.736 11:12:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 ************************************ 00:05:47.736 START TEST env_mem_callbacks 00:05:47.736 ************************************ 00:05:47.736 11:12:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.736 EAL: Detected CPU lcores: 128 00:05:47.736 EAL: Detected NUMA nodes: 2 00:05:47.736 EAL: Detected shared linkage of DPDK 00:05:47.736 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.736 EAL: Selected IOVA mode 'VA' 00:05:47.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.736 EAL: VFIO support initialized 00:05:47.736 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.736 00:05:47.736 00:05:47.736 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.736 http://cunit.sourceforge.net/ 00:05:47.736 00:05:47.736 00:05:47.736 Suite: memory 00:05:47.736 Test: test ... 
00:05:47.736 register 0x200000200000 2097152 00:05:47.736 malloc 3145728 00:05:47.736 register 0x200000400000 4194304 00:05:47.736 buf 0x200000500000 len 3145728 PASSED 00:05:47.736 malloc 64 00:05:47.736 buf 0x2000004fff40 len 64 PASSED 00:05:47.736 malloc 4194304 00:05:47.736 register 0x200000800000 6291456 00:05:47.736 buf 0x200000a00000 len 4194304 PASSED 00:05:47.736 free 0x200000500000 3145728 00:05:47.736 free 0x2000004fff40 64 00:05:47.736 unregister 0x200000400000 4194304 PASSED 00:05:47.736 free 0x200000a00000 4194304 00:05:47.736 unregister 0x200000800000 6291456 PASSED 00:05:47.736 malloc 8388608 00:05:47.736 register 0x200000400000 10485760 00:05:47.736 buf 0x200000600000 len 8388608 PASSED 00:05:47.736 free 0x200000600000 8388608 00:05:47.736 unregister 0x200000400000 10485760 PASSED 00:05:47.736 passed 00:05:47.736 00:05:47.736 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.736 suites 1 1 n/a 0 0 00:05:47.736 tests 1 1 1 0 0 00:05:47.736 asserts 15 15 15 0 n/a 00:05:47.736 00:05:47.736 Elapsed time = 0.008 seconds 00:05:47.736 00:05:47.736 real 0m0.065s 00:05:47.736 user 0m0.021s 00:05:47.736 sys 0m0.044s 00:05:47.736 11:12:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.736 11:12:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 ************************************ 00:05:47.736 END TEST env_mem_callbacks 00:05:47.736 ************************************ 00:05:47.736 00:05:47.736 real 0m11.859s 00:05:47.736 user 0m4.921s 00:05:47.736 sys 0m0.944s 00:05:47.736 11:12:44 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.736 11:12:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 ************************************ 00:05:47.736 END TEST env 00:05:47.736 ************************************ 00:05:47.736 11:12:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.736 11:12:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.736 11:12:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.736 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 ************************************ 00:05:47.736 START TEST rpc 00:05:47.736 ************************************ 00:05:47.736 11:12:44 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.736 * Looking for test storage... 00:05:47.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.998 11:12:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1338175 00:05:47.998 11:12:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.998 11:12:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:47.998 11:12:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1338175 00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@830 -- # '[' -z 1338175 ']' 00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:47.998 11:12:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.998 [2024-06-10 11:12:45.020744] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:05:47.998 [2024-06-10 11:12:45.020808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338175 ] 00:05:47.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.998 [2024-06-10 11:12:45.106462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.998 [2024-06-10 11:12:45.177891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.998 [2024-06-10 11:12:45.177926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1338175' to capture a snapshot of events at runtime. 00:05:47.998 [2024-06-10 11:12:45.177933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.998 [2024-06-10 11:12:45.177939] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.998 [2024-06-10 11:12:45.177944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1338175 for offline analysis/debug. 00:05:47.998 [2024-06-10 11:12:45.177965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.942 11:12:45 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:48.942 11:12:45 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:48.942 11:12:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.942 11:12:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.942 11:12:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.942 11:12:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.942 11:12:45 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.942 11:12:45 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.942 11:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.942 ************************************ 00:05:48.942 START TEST rpc_integrity 00:05:48.942 ************************************ 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.942 11:12:45 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.942 11:12:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.942 { 00:05:48.942 "name": "Malloc0", 00:05:48.942 "aliases": [ 00:05:48.942 "22a5b694-742a-437d-8801-3f39e78b9554" 00:05:48.942 ], 00:05:48.942 "product_name": "Malloc disk", 00:05:48.942 "block_size": 512, 00:05:48.942 "num_blocks": 16384, 00:05:48.942 "uuid": "22a5b694-742a-437d-8801-3f39e78b9554", 00:05:48.942 "assigned_rate_limits": { 00:05:48.942 "rw_ios_per_sec": 0, 00:05:48.942 "rw_mbytes_per_sec": 0, 00:05:48.942 "r_mbytes_per_sec": 0, 00:05:48.942 "w_mbytes_per_sec": 0 00:05:48.942 }, 00:05:48.942 "claimed": false, 00:05:48.942 "zoned": false, 00:05:48.942 "supported_io_types": { 00:05:48.942 "read": true, 00:05:48.942 "write": true, 00:05:48.942 "unmap": true, 00:05:48.942 "write_zeroes": true, 00:05:48.942 "flush": true, 00:05:48.942 "reset": true, 00:05:48.942 "compare": false, 00:05:48.942 "compare_and_write": false, 00:05:48.942 "abort": true, 00:05:48.942 "nvme_admin": false, 00:05:48.942 "nvme_io": false 00:05:48.942 }, 00:05:48.942 "memory_domains": [ 00:05:48.942 { 00:05:48.942 "dma_device_id": "system", 00:05:48.942 "dma_device_type": 1 00:05:48.942 }, 00:05:48.942 { 00:05:48.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.942 "dma_device_type": 2 00:05:48.942 } 00:05:48.942 ], 00:05:48.942 "driver_specific": {} 00:05:48.942 } 00:05:48.942 ]' 00:05:48.942 11:12:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.942 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.942 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.942 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.942 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.942 [2024-06-10 11:12:46.032746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.942 [2024-06-10 11:12:46.032774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.942 [2024-06-10 11:12:46.032786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1faa600 00:05:48.942 [2024-06-10 11:12:46.032792] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.942 [2024-06-10 11:12:46.034020] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.942 [2024-06-10 11:12:46.034039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.942 Passthru0 00:05:48.942 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.942 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.943 { 00:05:48.943 "name": "Malloc0", 00:05:48.943 "aliases": [ 00:05:48.943 "22a5b694-742a-437d-8801-3f39e78b9554" 00:05:48.943 ], 00:05:48.943 "product_name": "Malloc disk", 00:05:48.943 "block_size": 512, 00:05:48.943 "num_blocks": 16384, 00:05:48.943 "uuid": "22a5b694-742a-437d-8801-3f39e78b9554", 00:05:48.943 "assigned_rate_limits": { 00:05:48.943 "rw_ios_per_sec": 0, 00:05:48.943 "rw_mbytes_per_sec": 0, 00:05:48.943 "r_mbytes_per_sec": 0, 00:05:48.943 "w_mbytes_per_sec": 0 00:05:48.943 }, 00:05:48.943 "claimed": true, 00:05:48.943 "claim_type": "exclusive_write", 00:05:48.943 "zoned": false, 00:05:48.943 "supported_io_types": { 00:05:48.943 "read": true, 00:05:48.943 "write": true, 00:05:48.943 "unmap": true, 00:05:48.943 "write_zeroes": true, 00:05:48.943 "flush": true, 00:05:48.943 "reset": true, 00:05:48.943 "compare": false, 00:05:48.943 "compare_and_write": false, 00:05:48.943 "abort": true, 00:05:48.943 "nvme_admin": false, 00:05:48.943 "nvme_io": false 00:05:48.943 }, 00:05:48.943 "memory_domains": [ 00:05:48.943 { 00:05:48.943 "dma_device_id": "system", 00:05:48.943 "dma_device_type": 1 00:05:48.943 }, 00:05:48.943 { 00:05:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.943 "dma_device_type": 2 00:05:48.943 } 00:05:48.943 ], 00:05:48.943 "driver_specific": {} 00:05:48.943 }, 00:05:48.943 { 00:05:48.943 "name": "Passthru0", 00:05:48.943 "aliases": [ 00:05:48.943 "db9033fa-95bc-5e22-a288-367b73ed3d97" 00:05:48.943 ], 00:05:48.943 "product_name": "passthru", 00:05:48.943 "block_size": 512, 00:05:48.943 "num_blocks": 16384, 00:05:48.943 "uuid": "db9033fa-95bc-5e22-a288-367b73ed3d97", 00:05:48.943 "assigned_rate_limits": { 00:05:48.943 "rw_ios_per_sec": 0, 00:05:48.943 "rw_mbytes_per_sec": 0, 00:05:48.943 "r_mbytes_per_sec": 0, 00:05:48.943 "w_mbytes_per_sec": 0 00:05:48.943 }, 00:05:48.943 "claimed": false, 00:05:48.943 "zoned": false, 00:05:48.943 "supported_io_types": { 00:05:48.943 "read": true, 00:05:48.943 "write": true, 00:05:48.943 "unmap": true, 00:05:48.943 "write_zeroes": true, 00:05:48.943 "flush": true, 00:05:48.943 "reset": true, 00:05:48.943 "compare": false, 00:05:48.943 "compare_and_write": false, 00:05:48.943 "abort": true, 00:05:48.943 "nvme_admin": false, 00:05:48.943 "nvme_io": false 00:05:48.943 }, 00:05:48.943 "memory_domains": [ 00:05:48.943 { 00:05:48.943 "dma_device_id": "system", 00:05:48.943 "dma_device_type": 1 00:05:48.943 }, 00:05:48.943 { 00:05:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.943 "dma_device_type": 2 00:05:48.943 } 00:05:48.943 ], 00:05:48.943 "driver_specific": { 00:05:48.943 "passthru": { 00:05:48.943 "name": "Passthru0", 00:05:48.943 "base_bdev_name": "Malloc0" 00:05:48.943 } 00:05:48.943 } 00:05:48.943 } 00:05:48.943 ]' 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.943 
11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.943 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.943 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.204 11:12:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.204 00:05:49.204 real 0m0.290s 00:05:49.204 user 0m0.187s 00:05:49.204 sys 0m0.037s 00:05:49.204 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.204 11:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 ************************************ 00:05:49.204 END TEST rpc_integrity 00:05:49.204 ************************************ 00:05:49.204 11:12:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.204 11:12:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.204 11:12:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.204 11:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 ************************************ 00:05:49.204 START TEST rpc_plugins 00:05:49.204 ************************************ 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.204 { 00:05:49.204 "name": "Malloc1", 00:05:49.204 "aliases": [ 00:05:49.204 "21957275-524f-424f-bf3a-a8ac3205e662" 00:05:49.204 ], 00:05:49.204 "product_name": "Malloc disk", 00:05:49.204 "block_size": 4096, 00:05:49.204 "num_blocks": 256, 00:05:49.204 "uuid": "21957275-524f-424f-bf3a-a8ac3205e662", 00:05:49.204 "assigned_rate_limits": { 00:05:49.204 "rw_ios_per_sec": 0, 00:05:49.204 "rw_mbytes_per_sec": 0, 00:05:49.204 "r_mbytes_per_sec": 0, 00:05:49.204 "w_mbytes_per_sec": 0 00:05:49.204 }, 00:05:49.204 "claimed": false, 00:05:49.204 "zoned": false, 00:05:49.204 "supported_io_types": { 00:05:49.204 "read": true, 00:05:49.204 "write": true, 00:05:49.204 "unmap": true, 00:05:49.204 "write_zeroes": true, 00:05:49.204 
"flush": true, 00:05:49.204 "reset": true, 00:05:49.204 "compare": false, 00:05:49.204 "compare_and_write": false, 00:05:49.204 "abort": true, 00:05:49.204 "nvme_admin": false, 00:05:49.204 "nvme_io": false 00:05:49.204 }, 00:05:49.204 "memory_domains": [ 00:05:49.204 { 00:05:49.204 "dma_device_id": "system", 00:05:49.204 "dma_device_type": 1 00:05:49.204 }, 00:05:49.204 { 00:05:49.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.204 "dma_device_type": 2 00:05:49.204 } 00:05:49.204 ], 00:05:49.204 "driver_specific": {} 00:05:49.204 } 00:05:49.204 ]' 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.204 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.204 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.205 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.205 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.205 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.205 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.205 11:12:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.205 00:05:49.205 real 0m0.151s 00:05:49.205 user 0m0.094s 00:05:49.205 sys 0m0.021s 00:05:49.205 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.205 11:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.205 ************************************ 00:05:49.205 END TEST rpc_plugins 00:05:49.205 ************************************ 00:05:49.466 11:12:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.466 11:12:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.466 11:12:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.466 11:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.466 ************************************ 00:05:49.466 START TEST rpc_trace_cmd_test 00:05:49.466 ************************************ 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.466 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1338175", 00:05:49.466 "tpoint_group_mask": "0x8", 00:05:49.466 "iscsi_conn": { 00:05:49.466 "mask": "0x2", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "scsi": { 00:05:49.466 "mask": "0x4", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "bdev": { 00:05:49.466 "mask": "0x8", 00:05:49.466 "tpoint_mask": 
"0xffffffffffffffff" 00:05:49.466 }, 00:05:49.466 "nvmf_rdma": { 00:05:49.466 "mask": "0x10", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "nvmf_tcp": { 00:05:49.466 "mask": "0x20", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "ftl": { 00:05:49.466 "mask": "0x40", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "blobfs": { 00:05:49.466 "mask": "0x80", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "dsa": { 00:05:49.466 "mask": "0x200", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "thread": { 00:05:49.466 "mask": "0x400", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "nvme_pcie": { 00:05:49.466 "mask": "0x800", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "iaa": { 00:05:49.466 "mask": "0x1000", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "nvme_tcp": { 00:05:49.466 "mask": "0x2000", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "bdev_nvme": { 00:05:49.466 "mask": "0x4000", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 }, 00:05:49.466 "sock": { 00:05:49.466 "mask": "0x8000", 00:05:49.466 "tpoint_mask": "0x0" 00:05:49.466 } 00:05:49.466 }' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.466 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.728 11:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.728 00:05:49.728 real 0m0.247s 00:05:49.728 user 0m0.212s 00:05:49.728 sys 0m0.025s 00:05:49.728 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 ************************************ 00:05:49.728 END TEST rpc_trace_cmd_test 00:05:49.728 ************************************ 00:05:49.728 11:12:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.728 11:12:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.728 11:12:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.728 11:12:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.728 11:12:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.728 11:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 ************************************ 00:05:49.728 START TEST rpc_daemon_integrity 00:05:49.728 ************************************ 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.728 { 00:05:49.728 "name": "Malloc2", 00:05:49.728 "aliases": [ 00:05:49.728 "55867a70-bdb8-4de8-8f30-f03fc4628196" 00:05:49.728 ], 00:05:49.728 "product_name": "Malloc disk", 00:05:49.728 "block_size": 512, 00:05:49.728 "num_blocks": 16384, 00:05:49.728 "uuid": "55867a70-bdb8-4de8-8f30-f03fc4628196", 00:05:49.728 "assigned_rate_limits": { 00:05:49.728 "rw_ios_per_sec": 0, 00:05:49.728 "rw_mbytes_per_sec": 0, 00:05:49.728 "r_mbytes_per_sec": 0, 00:05:49.728 "w_mbytes_per_sec": 0 00:05:49.728 }, 00:05:49.728 "claimed": false, 00:05:49.728 "zoned": false, 00:05:49.728 "supported_io_types": { 00:05:49.728 "read": true, 00:05:49.728 "write": true, 00:05:49.728 "unmap": true, 00:05:49.728 "write_zeroes": true, 00:05:49.728 "flush": true, 00:05:49.728 "reset": true, 00:05:49.728 "compare": false, 00:05:49.728 "compare_and_write": false, 00:05:49.728 "abort": true, 00:05:49.728 "nvme_admin": false, 00:05:49.728 "nvme_io": false 00:05:49.728 }, 00:05:49.728 "memory_domains": [ 00:05:49.728 { 00:05:49.728 "dma_device_id": "system", 00:05:49.728 "dma_device_type": 1 00:05:49.728 }, 00:05:49.728 { 00:05:49.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.728 "dma_device_type": 2 00:05:49.728 } 00:05:49.728 ], 00:05:49.728 "driver_specific": {} 00:05:49.728 } 00:05:49.728 ]' 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.728 [2024-06-10 11:12:46.931162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.728 [2024-06-10 11:12:46.931188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.728 [2024-06-10 11:12:46.931202] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fa1ed0 00:05:49.728 [2024-06-10 11:12:46.931209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.728 [2024-06-10 11:12:46.932340] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.728 [2024-06-10 11:12:46.932358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.728 Passthru0 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.728 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.989 11:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.989 11:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.989 { 00:05:49.990 "name": "Malloc2", 00:05:49.990 "aliases": [ 00:05:49.990 "55867a70-bdb8-4de8-8f30-f03fc4628196" 00:05:49.990 ], 00:05:49.990 "product_name": "Malloc disk", 00:05:49.990 "block_size": 512, 00:05:49.990 "num_blocks": 16384, 00:05:49.990 "uuid": "55867a70-bdb8-4de8-8f30-f03fc4628196", 00:05:49.990 "assigned_rate_limits": { 00:05:49.990 "rw_ios_per_sec": 0, 00:05:49.990 "rw_mbytes_per_sec": 0, 00:05:49.990 "r_mbytes_per_sec": 0, 00:05:49.990 "w_mbytes_per_sec": 0 00:05:49.990 }, 00:05:49.990 "claimed": true, 00:05:49.990 "claim_type": "exclusive_write", 00:05:49.990 "zoned": false, 00:05:49.990 "supported_io_types": { 00:05:49.990 "read": true, 00:05:49.990 "write": true, 00:05:49.990 "unmap": true, 00:05:49.990 "write_zeroes": true, 00:05:49.990 "flush": true, 00:05:49.990 "reset": true, 00:05:49.990 "compare": false, 00:05:49.990 "compare_and_write": false, 00:05:49.990 "abort": true, 00:05:49.990 "nvme_admin": false, 00:05:49.990 "nvme_io": false 00:05:49.990 }, 00:05:49.990 "memory_domains": [ 00:05:49.990 { 00:05:49.990 "dma_device_id": "system", 00:05:49.990 "dma_device_type": 1 00:05:49.990 }, 00:05:49.990 { 00:05:49.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.990 "dma_device_type": 2 00:05:49.990 } 00:05:49.990 ], 00:05:49.990 "driver_specific": {} 00:05:49.990 }, 00:05:49.990 { 00:05:49.990 "name": "Passthru0", 00:05:49.990 "aliases": [ 00:05:49.990 "e0f3a94b-3cbf-5948-ab1d-8373581ad96e" 00:05:49.990 ], 00:05:49.990 "product_name": "passthru", 00:05:49.990 "block_size": 512, 00:05:49.990 "num_blocks": 16384, 00:05:49.990 "uuid": "e0f3a94b-3cbf-5948-ab1d-8373581ad96e", 00:05:49.990 "assigned_rate_limits": { 00:05:49.990 "rw_ios_per_sec": 0, 00:05:49.990 "rw_mbytes_per_sec": 0, 00:05:49.990 "r_mbytes_per_sec": 0, 00:05:49.990 "w_mbytes_per_sec": 0 00:05:49.990 }, 00:05:49.990 "claimed": false, 00:05:49.990 "zoned": false, 00:05:49.990 "supported_io_types": { 00:05:49.990 "read": true, 00:05:49.990 "write": true, 00:05:49.990 "unmap": true, 00:05:49.990 "write_zeroes": true, 00:05:49.990 "flush": true, 00:05:49.990 "reset": true, 00:05:49.990 "compare": false, 00:05:49.990 "compare_and_write": false, 00:05:49.990 "abort": true, 00:05:49.990 "nvme_admin": false, 00:05:49.990 "nvme_io": false 00:05:49.990 }, 00:05:49.990 "memory_domains": [ 00:05:49.990 { 00:05:49.990 "dma_device_id": "system", 00:05:49.990 "dma_device_type": 1 00:05:49.990 }, 00:05:49.990 { 00:05:49.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.990 "dma_device_type": 2 00:05:49.990 } 00:05:49.990 ], 00:05:49.990 "driver_specific": { 00:05:49.990 "passthru": { 00:05:49.990 "name": "Passthru0", 00:05:49.990 "base_bdev_name": "Malloc2" 00:05:49.990 } 00:05:49.990 } 00:05:49.990 } 00:05:49.990 ]' 00:05:49.990 11:12:46 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.990 00:05:49.990 real 0m0.285s 00:05:49.990 user 0m0.188s 00:05:49.990 sys 0m0.034s 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.990 11:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.990 ************************************ 00:05:49.990 END TEST rpc_daemon_integrity 00:05:49.990 ************************************ 00:05:49.990 11:12:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.990 11:12:47 rpc -- rpc/rpc.sh@84 -- # killprocess 1338175 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@949 -- # '[' -z 1338175 ']' 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@953 -- # kill -0 1338175 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@954 -- # uname 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1338175 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1338175' 00:05:49.990 killing process with pid 1338175 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@968 -- # kill 1338175 00:05:49.990 11:12:47 rpc -- common/autotest_common.sh@973 -- # wait 1338175 00:05:50.252 00:05:50.252 real 0m2.509s 00:05:50.252 user 0m3.351s 00:05:50.252 sys 0m0.683s 00:05:50.252 11:12:47 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.252 11:12:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.252 ************************************ 00:05:50.252 END TEST rpc 00:05:50.252 ************************************ 00:05:50.253 11:12:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.253 11:12:47 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.253 11:12:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.253 11:12:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.253 ************************************ 00:05:50.253 START TEST skip_rpc 00:05:50.253 ************************************ 00:05:50.253 11:12:47 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.515 * Looking for test storage... 00:05:50.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.515 11:12:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.515 11:12:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:50.515 11:12:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.515 11:12:47 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.515 11:12:47 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.515 11:12:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.515 ************************************ 00:05:50.515 START TEST skip_rpc 00:05:50.515 ************************************ 00:05:50.515 11:12:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:50.515 11:12:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1338664 00:05:50.515 11:12:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.515 11:12:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:50.515 11:12:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.515 [2024-06-10 11:12:47.654679] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:05:50.515 [2024-06-10 11:12:47.654745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338664 ] 00:05:50.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.515 [2024-06-10 11:12:47.735103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.776 [2024-06-10 11:12:47.813285] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1338664 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1338664 ']' 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1338664 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1338664 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:56.065 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:56.066 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1338664' 00:05:56.066 killing process with pid 1338664 00:05:56.066 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1338664 00:05:56.066 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1338664 00:05:56.066 00:05:56.066 real 0m5.278s 00:05:56.066 user 0m5.067s 00:05:56.066 sys 0m0.257s 00:05:56.066 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.066 11:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 ************************************ 00:05:56.066 END TEST skip_rpc 
00:05:56.066 ************************************ 00:05:56.066 11:12:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:56.066 11:12:52 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.066 11:12:52 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.066 11:12:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 ************************************ 00:05:56.066 START TEST skip_rpc_with_json 00:05:56.066 ************************************ 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1339610 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1339610 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1339610 ']' 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.066 11:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 [2024-06-10 11:12:52.995082] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:05:56.066 [2024-06-10 11:12:52.995131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339610 ] 00:05:56.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.066 [2024-06-10 11:12:53.077862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.066 [2024-06-10 11:12:53.140415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.635 [2024-06-10 11:12:53.828567] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.635 request: 00:05:56.635 { 00:05:56.635 "trtype": "tcp", 00:05:56.635 "method": "nvmf_get_transports", 00:05:56.635 "req_id": 1 00:05:56.635 } 00:05:56.635 Got JSON-RPC error response 00:05:56.635 response: 00:05:56.635 { 00:05:56.635 "code": -19, 00:05:56.635 "message": "No such device" 00:05:56.635 } 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.635 [2024-06-10 11:12:53.840684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.635 11:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.896 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.896 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.896 { 00:05:56.896 "subsystems": [ 00:05:56.896 { 00:05:56.896 "subsystem": "vfio_user_target", 00:05:56.896 "config": null 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "keyring", 00:05:56.896 "config": [] 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "iobuf", 00:05:56.896 "config": [ 00:05:56.896 { 00:05:56.896 "method": "iobuf_set_options", 00:05:56.896 "params": { 00:05:56.896 "small_pool_count": 8192, 00:05:56.896 "large_pool_count": 1024, 00:05:56.896 "small_bufsize": 8192, 00:05:56.896 "large_bufsize": 135168 00:05:56.896 } 00:05:56.896 } 00:05:56.896 ] 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "sock", 00:05:56.896 "config": [ 00:05:56.896 { 00:05:56.896 "method": "sock_set_default_impl", 00:05:56.896 "params": { 00:05:56.896 "impl_name": "posix" 00:05:56.896 } 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "method": 
"sock_impl_set_options", 00:05:56.896 "params": { 00:05:56.896 "impl_name": "ssl", 00:05:56.896 "recv_buf_size": 4096, 00:05:56.896 "send_buf_size": 4096, 00:05:56.896 "enable_recv_pipe": true, 00:05:56.896 "enable_quickack": false, 00:05:56.896 "enable_placement_id": 0, 00:05:56.896 "enable_zerocopy_send_server": true, 00:05:56.896 "enable_zerocopy_send_client": false, 00:05:56.896 "zerocopy_threshold": 0, 00:05:56.896 "tls_version": 0, 00:05:56.896 "enable_ktls": false 00:05:56.896 } 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "method": "sock_impl_set_options", 00:05:56.896 "params": { 00:05:56.896 "impl_name": "posix", 00:05:56.896 "recv_buf_size": 2097152, 00:05:56.896 "send_buf_size": 2097152, 00:05:56.896 "enable_recv_pipe": true, 00:05:56.896 "enable_quickack": false, 00:05:56.896 "enable_placement_id": 0, 00:05:56.896 "enable_zerocopy_send_server": true, 00:05:56.896 "enable_zerocopy_send_client": false, 00:05:56.896 "zerocopy_threshold": 0, 00:05:56.896 "tls_version": 0, 00:05:56.896 "enable_ktls": false 00:05:56.896 } 00:05:56.896 } 00:05:56.896 ] 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "vmd", 00:05:56.896 "config": [] 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "accel", 00:05:56.896 "config": [ 00:05:56.896 { 00:05:56.896 "method": "accel_set_options", 00:05:56.896 "params": { 00:05:56.896 "small_cache_size": 128, 00:05:56.896 "large_cache_size": 16, 00:05:56.896 "task_count": 2048, 00:05:56.896 "sequence_count": 2048, 00:05:56.896 "buf_count": 2048 00:05:56.896 } 00:05:56.896 } 00:05:56.896 ] 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "subsystem": "bdev", 00:05:56.896 "config": [ 00:05:56.896 { 00:05:56.896 "method": "bdev_set_options", 00:05:56.896 "params": { 00:05:56.896 "bdev_io_pool_size": 65535, 00:05:56.896 "bdev_io_cache_size": 256, 00:05:56.896 "bdev_auto_examine": true, 00:05:56.896 "iobuf_small_cache_size": 128, 00:05:56.896 "iobuf_large_cache_size": 16 00:05:56.896 } 00:05:56.896 }, 00:05:56.896 { 00:05:56.896 "method": "bdev_raid_set_options", 00:05:56.896 "params": { 00:05:56.896 "process_window_size_kb": 1024 00:05:56.896 } 00:05:56.896 }, 00:05:56.897 { 00:05:56.897 "method": "bdev_iscsi_set_options", 00:05:56.897 "params": { 00:05:56.897 "timeout_sec": 30 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "bdev_nvme_set_options", 00:05:56.897 "params": { 00:05:56.897 "action_on_timeout": "none", 00:05:56.897 "timeout_us": 0, 00:05:56.897 "timeout_admin_us": 0, 00:05:56.897 "keep_alive_timeout_ms": 10000, 00:05:56.897 "arbitration_burst": 0, 00:05:56.897 "low_priority_weight": 0, 00:05:56.897 "medium_priority_weight": 0, 00:05:56.897 "high_priority_weight": 0, 00:05:56.897 "nvme_adminq_poll_period_us": 10000, 00:05:56.897 "nvme_ioq_poll_period_us": 0, 00:05:56.897 "io_queue_requests": 0, 00:05:56.897 "delay_cmd_submit": true, 00:05:56.897 "transport_retry_count": 4, 00:05:56.897 "bdev_retry_count": 3, 00:05:56.897 "transport_ack_timeout": 0, 00:05:56.897 "ctrlr_loss_timeout_sec": 0, 00:05:56.897 "reconnect_delay_sec": 0, 00:05:56.897 "fast_io_fail_timeout_sec": 0, 00:05:56.897 "disable_auto_failback": false, 00:05:56.897 "generate_uuids": false, 00:05:56.897 "transport_tos": 0, 00:05:56.897 "nvme_error_stat": false, 00:05:56.897 "rdma_srq_size": 0, 00:05:56.897 "io_path_stat": false, 00:05:56.897 "allow_accel_sequence": false, 00:05:56.897 "rdma_max_cq_size": 0, 00:05:56.897 "rdma_cm_event_timeout_ms": 0, 00:05:56.897 "dhchap_digests": [ 00:05:56.897 "sha256", 00:05:56.897 "sha384", 00:05:56.897 "sha512" 
00:05:56.897 ], 00:05:56.897 "dhchap_dhgroups": [ 00:05:56.897 "null", 00:05:56.897 "ffdhe2048", 00:05:56.897 "ffdhe3072", 00:05:56.897 "ffdhe4096", 00:05:56.897 "ffdhe6144", 00:05:56.897 "ffdhe8192" 00:05:56.897 ] 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "bdev_nvme_set_hotplug", 00:05:56.897 "params": { 00:05:56.897 "period_us": 100000, 00:05:56.897 "enable": false 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "bdev_wait_for_examine" 00:05:56.897 } 00:05:56.897 ] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "scsi", 00:05:56.897 "config": null 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "scheduler", 00:05:56.897 "config": [ 00:05:56.897 { 00:05:56.897 "method": "framework_set_scheduler", 00:05:56.897 "params": { 00:05:56.897 "name": "static" 00:05:56.897 } 00:05:56.897 } 00:05:56.897 ] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "vhost_scsi", 00:05:56.897 "config": [] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "vhost_blk", 00:05:56.897 "config": [] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "ublk", 00:05:56.897 "config": [] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "nbd", 00:05:56.897 "config": [] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "nvmf", 00:05:56.897 "config": [ 00:05:56.897 { 00:05:56.897 "method": "nvmf_set_config", 00:05:56.897 "params": { 00:05:56.897 "discovery_filter": "match_any", 00:05:56.897 "admin_cmd_passthru": { 00:05:56.897 "identify_ctrlr": false 00:05:56.897 } 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "nvmf_set_max_subsystems", 00:05:56.897 "params": { 00:05:56.897 "max_subsystems": 1024 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "nvmf_set_crdt", 00:05:56.897 "params": { 00:05:56.897 "crdt1": 0, 00:05:56.897 "crdt2": 0, 00:05:56.897 "crdt3": 0 00:05:56.897 } 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "method": "nvmf_create_transport", 00:05:56.897 "params": { 00:05:56.897 "trtype": "TCP", 00:05:56.897 "max_queue_depth": 128, 00:05:56.897 "max_io_qpairs_per_ctrlr": 127, 00:05:56.897 "in_capsule_data_size": 4096, 00:05:56.897 "max_io_size": 131072, 00:05:56.897 "io_unit_size": 131072, 00:05:56.897 "max_aq_depth": 128, 00:05:56.897 "num_shared_buffers": 511, 00:05:56.897 "buf_cache_size": 4294967295, 00:05:56.897 "dif_insert_or_strip": false, 00:05:56.897 "zcopy": false, 00:05:56.897 "c2h_success": true, 00:05:56.897 "sock_priority": 0, 00:05:56.897 "abort_timeout_sec": 1, 00:05:56.897 "ack_timeout": 0, 00:05:56.897 "data_wr_pool_size": 0 00:05:56.897 } 00:05:56.897 } 00:05:56.897 ] 00:05:56.897 }, 00:05:56.897 { 00:05:56.897 "subsystem": "iscsi", 00:05:56.897 "config": [ 00:05:56.897 { 00:05:56.897 "method": "iscsi_set_options", 00:05:56.897 "params": { 00:05:56.897 "node_base": "iqn.2016-06.io.spdk", 00:05:56.897 "max_sessions": 128, 00:05:56.897 "max_connections_per_session": 2, 00:05:56.897 "max_queue_depth": 64, 00:05:56.897 "default_time2wait": 2, 00:05:56.897 "default_time2retain": 20, 00:05:56.897 "first_burst_length": 8192, 00:05:56.897 "immediate_data": true, 00:05:56.897 "allow_duplicated_isid": false, 00:05:56.897 "error_recovery_level": 0, 00:05:56.897 "nop_timeout": 60, 00:05:56.897 "nop_in_interval": 30, 00:05:56.897 "disable_chap": false, 00:05:56.897 "require_chap": false, 00:05:56.897 "mutual_chap": false, 00:05:56.897 "chap_group": 0, 00:05:56.897 "max_large_datain_per_connection": 64, 00:05:56.897 "max_r2t_per_connection": 4, 00:05:56.897 
"pdu_pool_size": 36864, 00:05:56.897 "immediate_data_pool_size": 16384, 00:05:56.897 "data_out_pool_size": 2048 00:05:56.897 } 00:05:56.897 } 00:05:56.897 ] 00:05:56.897 } 00:05:56.897 ] 00:05:56.897 } 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1339610 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1339610 ']' 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1339610 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1339610 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1339610' 00:05:56.897 killing process with pid 1339610 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1339610 00:05:56.897 11:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1339610 00:05:57.158 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1339919 00:05:57.158 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.158 11:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1339919 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1339919 ']' 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1339919 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:02.472 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1339919 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1339919' 00:06:02.473 killing process with pid 1339919 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1339919 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1339919 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.473 00:06:02.473 real 
0m6.592s 00:06:02.473 user 0m6.543s 00:06:02.473 sys 0m0.526s 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.473 ************************************ 00:06:02.473 END TEST skip_rpc_with_json 00:06:02.473 ************************************ 00:06:02.473 11:12:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:02.473 11:12:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:02.473 11:12:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.473 11:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.473 ************************************ 00:06:02.473 START TEST skip_rpc_with_delay 00:06:02.473 ************************************ 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.473 [2024-06-10 11:12:59.667286] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
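The *ERROR* line above is exactly what skip_rpc_with_delay asserts: spdk_tgt refuses --wait-for-rpc when it is started with --no-rpc-server, since there would be no RPC server to wait on. A minimal sketch of the same check, using the binary path seen throughout this run (the test itself wraps the call in its NOT helper):

  # expect spdk_tgt to fail fast when asked to wait for an RPC server it will never start
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "expected failure, got success" >&2
      exit 1
  fi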
00:06:02.473 [2024-06-10 11:12:59.667353] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:02.473 00:06:02.473 real 0m0.071s 00:06:02.473 user 0m0.041s 00:06:02.473 sys 0m0.030s 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:02.473 11:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.473 ************************************ 00:06:02.473 END TEST skip_rpc_with_delay 00:06:02.473 ************************************ 00:06:02.734 11:12:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.734 11:12:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.734 11:12:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.734 11:12:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:02.734 11:12:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.734 11:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.734 ************************************ 00:06:02.734 START TEST exit_on_failed_rpc_init 00:06:02.734 ************************************ 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1340884 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1340884 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1340884 ']' 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:02.734 11:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.734 [2024-06-10 11:12:59.823782] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:06:02.734 [2024-06-10 11:12:59.823835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340884 ] 00:06:02.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.734 [2024-06-10 11:12:59.905519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.995 [2024-06-10 11:12:59.967798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.564 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.565 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.565 [2024-06-10 11:13:00.705751] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:03.565 [2024-06-10 11:13:00.705803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340928 ] 00:06:03.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.565 [2024-06-10 11:13:00.767068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.825 [2024-06-10 11:13:00.828576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.825 [2024-06-10 11:13:00.828639] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
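The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error is the condition exit_on_failed_rpc_init exercises: a second target bound to the same default RPC socket must fail RPC initialization and exit non-zero. Roughly, with the flags used in this run (the test waits for the first instance to listen before launching the second):

  ./build/bin/spdk_tgt -m 0x1 &          # first instance claims /var/tmp/spdk.sock
  first=$!
  # a second instance on another core mask but the same default socket should fail to init
  if ./build/bin/spdk_tgt -m 0x2; then
      echo "second target unexpectedly initialized" >&2
  fi
  kill -SIGINT "$first"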
00:06:03.825 [2024-06-10 11:13:00.828648] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.825 [2024-06-10 11:13:00.828654] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1340884 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1340884 ']' 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1340884 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1340884 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1340884' 00:06:03.825 killing process with pid 1340884 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1340884 00:06:03.825 11:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1340884 00:06:04.087 00:06:04.087 real 0m1.382s 00:06:04.087 user 0m1.649s 00:06:04.087 sys 0m0.380s 00:06:04.087 11:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.087 11:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.087 ************************************ 00:06:04.087 END TEST exit_on_failed_rpc_init 00:06:04.087 ************************************ 00:06:04.087 11:13:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.087 00:06:04.087 real 0m13.735s 00:06:04.087 user 0m13.453s 00:06:04.087 sys 0m1.475s 00:06:04.087 11:13:01 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.087 11:13:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.087 ************************************ 00:06:04.087 END TEST skip_rpc 00:06:04.087 ************************************ 00:06:04.087 11:13:01 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.087 11:13:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.087 11:13:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.087 11:13:01 -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.087 ************************************ 00:06:04.087 START TEST rpc_client 00:06:04.087 ************************************ 00:06:04.087 11:13:01 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.348 * Looking for test storage... 00:06:04.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:04.348 11:13:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:04.348 OK 00:06:04.348 11:13:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:04.348 00:06:04.348 real 0m0.127s 00:06:04.348 user 0m0.061s 00:06:04.348 sys 0m0.074s 00:06:04.348 11:13:01 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.348 11:13:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:04.348 ************************************ 00:06:04.348 END TEST rpc_client 00:06:04.348 ************************************ 00:06:04.348 11:13:01 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.348 11:13:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.348 11:13:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.348 11:13:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.348 ************************************ 00:06:04.348 START TEST json_config 00:06:04.348 ************************************ 00:06:04.348 11:13:01 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.348 11:13:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.348 11:13:01 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.348 11:13:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.348 11:13:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.348 11:13:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.349 11:13:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.349 11:13:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.349 11:13:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.349 11:13:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.349 11:13:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@47 -- # : 0 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.349 11:13:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:04.349 INFO: JSON configuration test init 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:04.349 11:13:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:04.349 11:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:04.349 11:13:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:04.349 11:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 11:13:01 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:04.349 11:13:01 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.349 11:13:01 json_config -- json_config/common.sh@10 -- # shift 00:06:04.610 11:13:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.610 11:13:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.610 11:13:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.610 11:13:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.610 11:13:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.610 11:13:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1341309 00:06:04.610 11:13:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.610 Waiting for target to run... 
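For the json_config test the target is brought up on its own RPC socket with --wait-for-rpc, so the whole configuration can be driven over RPC before any subsystem initializes; the launch traced next amounts to:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # the test then polls the socket (waitforlisten) before issuing any rpc.py calls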
00:06:04.610 11:13:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1341309 /var/tmp/spdk_tgt.sock 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@830 -- # '[' -z 1341309 ']' 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.610 11:13:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.610 11:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.610 [2024-06-10 11:13:01.640009] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:04.610 [2024-06-10 11:13:01.640072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341309 ] 00:06:04.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.871 [2024-06-10 11:13:01.958654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.871 [2024-06-10 11:13:02.016911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:05.443 11:13:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.443 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:05.443 11:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:05.443 11:13:02 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:05.443 11:13:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.795 11:13:05 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:08.795 11:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:08.795 11:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:08.795 11:13:05 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.795 11:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.795 MallocForNvmf0 00:06:09.054 11:13:06 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.054 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.054 MallocForNvmf1 00:06:09.054 11:13:06 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.055 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.314 [2024-06-10 11:13:06.339852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.314 11:13:06 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.314 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.613 11:13:06 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.613 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.613 11:13:06 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.613 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.872 11:13:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.872 11:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.872 [2024-06-10 11:13:07.094330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.133 11:13:07 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:10.133 11:13:07 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:10.133 11:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.133 11:13:07 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:10.133 11:13:07 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:10.133 11:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.133 11:13:07 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:10.133 11:13:07 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.133 11:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.392 MallocBdevForConfigChangeCheck 00:06:10.392 11:13:07 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:10.392 11:13:07 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:10.392 11:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.392 11:13:07 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:10.392 11:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.652 11:13:07 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:10.652 INFO: shutting down applications... 
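Condensed from the trace above, the NVMe-oF target configuration that save_config captures is built by this RPC sequence (paths shortened; the test issues the same calls through its tgt_rpc helper):

  rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  rpc save_config > spdk_tgt_config.json   # captured config, reused for the relaunch later in the test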
00:06:10.652 11:13:07 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:10.652 11:13:07 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:10.652 11:13:07 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:10.652 11:13:07 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:13.194 Calling clear_iscsi_subsystem 00:06:13.194 Calling clear_nvmf_subsystem 00:06:13.194 Calling clear_nbd_subsystem 00:06:13.194 Calling clear_ublk_subsystem 00:06:13.194 Calling clear_vhost_blk_subsystem 00:06:13.194 Calling clear_vhost_scsi_subsystem 00:06:13.194 Calling clear_bdev_subsystem 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:13.194 11:13:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:13.454 11:13:10 json_config -- json_config/json_config.sh@345 -- # break 00:06:13.454 11:13:10 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:13.454 11:13:10 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:13.454 11:13:10 json_config -- json_config/common.sh@31 -- # local app=target 00:06:13.454 11:13:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.454 11:13:10 json_config -- json_config/common.sh@35 -- # [[ -n 1341309 ]] 00:06:13.454 11:13:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1341309 00:06:13.454 11:13:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.454 11:13:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.454 11:13:10 json_config -- json_config/common.sh@41 -- # kill -0 1341309 00:06:13.454 11:13:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.025 11:13:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.025 11:13:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.025 11:13:11 json_config -- json_config/common.sh@41 -- # kill -0 1341309 00:06:14.025 11:13:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:14.025 11:13:11 json_config -- json_config/common.sh@43 -- # break 00:06:14.025 11:13:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:14.025 11:13:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:14.025 SPDK target shutdown done 00:06:14.025 11:13:11 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:14.025 INFO: relaunching applications... 
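The shutdown just traced reduces to a signal-and-poll loop: SIGINT the target, then check up to 30 times, half a second apart, whether the pid is gone (variable names here are illustrative; the test keeps the pid in app_pid[target]):

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown complete
      sleep 0.5
  done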
00:06:14.025 11:13:11 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.025 11:13:11 json_config -- json_config/common.sh@9 -- # local app=target 00:06:14.025 11:13:11 json_config -- json_config/common.sh@10 -- # shift 00:06:14.025 11:13:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.025 11:13:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.025 11:13:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.025 11:13:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.025 11:13:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.025 11:13:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1342969 00:06:14.025 11:13:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.025 Waiting for target to run... 00:06:14.025 11:13:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1342969 /var/tmp/spdk_tgt.sock 00:06:14.025 11:13:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@830 -- # '[' -z 1342969 ']' 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.025 11:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.025 [2024-06-10 11:13:11.066779] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:14.025 [2024-06-10 11:13:11.066841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342969 ] 00:06:14.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.286 [2024-06-10 11:13:11.405365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.286 [2024-06-10 11:13:11.469028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.583 [2024-06-10 11:13:14.489877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.583 [2024-06-10 11:13:14.522309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:17.583 11:13:14 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:17.583 11:13:14 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:17.583 11:13:14 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.583 00:06:17.583 11:13:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:17.583 11:13:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.583 INFO: Checking if target configuration is the same... 
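Relaunching from the file written by save_config, and then verifying that nothing drifted, is roughly what json_diff.sh does in the trace that follows: re-dump the live config and compare the two after a stable sort (temp-file names here are illustrative):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
  # once the RPC socket is listening, dump the running config again and diff it
  # against the file the target was booted from, both normalized by a sort
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/now.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/now.json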
00:06:17.583 11:13:14 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.583 11:13:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:17.583 11:13:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.583 + '[' 2 -ne 2 ']' 00:06:17.583 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.583 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:17.583 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.583 +++ basename /dev/fd/62 00:06:17.583 ++ mktemp /tmp/62.XXX 00:06:17.583 + tmp_file_1=/tmp/62.Qfr 00:06:17.583 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.583 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.583 + tmp_file_2=/tmp/spdk_tgt_config.json.yAc 00:06:17.583 + ret=0 00:06:17.583 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.843 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.843 + diff -u /tmp/62.Qfr /tmp/spdk_tgt_config.json.yAc 00:06:17.843 + echo 'INFO: JSON config files are the same' 00:06:17.843 INFO: JSON config files are the same 00:06:17.843 + rm /tmp/62.Qfr /tmp/spdk_tgt_config.json.yAc 00:06:17.843 + exit 0 00:06:17.843 11:13:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:17.843 11:13:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:17.843 INFO: changing configuration and checking if this can be detected... 00:06:17.843 11:13:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.843 11:13:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.103 11:13:15 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.103 11:13:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:18.104 11:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.104 + '[' 2 -ne 2 ']' 00:06:18.104 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.104 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:18.104 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.104 +++ basename /dev/fd/62 00:06:18.104 ++ mktemp /tmp/62.XXX 00:06:18.104 + tmp_file_1=/tmp/62.58B 00:06:18.104 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.104 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.104 + tmp_file_2=/tmp/spdk_tgt_config.json.ErD 00:06:18.104 + ret=0 00:06:18.104 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.364 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.364 + diff -u /tmp/62.58B /tmp/spdk_tgt_config.json.ErD 00:06:18.364 + ret=1 00:06:18.364 + echo '=== Start of file: /tmp/62.58B ===' 00:06:18.364 + cat /tmp/62.58B 00:06:18.364 + echo '=== End of file: /tmp/62.58B ===' 00:06:18.364 + echo '' 00:06:18.364 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ErD ===' 00:06:18.364 + cat /tmp/spdk_tgt_config.json.ErD 00:06:18.364 + echo '=== End of file: /tmp/spdk_tgt_config.json.ErD ===' 00:06:18.364 + echo '' 00:06:18.364 + rm /tmp/62.58B /tmp/spdk_tgt_config.json.ErD 00:06:18.364 + exit 1 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:18.364 INFO: configuration change detected. 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@317 -- # [[ -n 1342969 ]] 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.364 11:13:15 json_config -- json_config/json_config.sh@323 -- # killprocess 1342969 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@949 -- # '[' -z 1342969 ']' 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@953 -- # kill -0 1342969 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@954 -- # uname 00:06:18.364 11:13:15 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:18.364 11:13:15 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1342969 00:06:18.625 11:13:15 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:18.625 11:13:15 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:18.625 11:13:15 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1342969' 00:06:18.625 killing process with pid 1342969 00:06:18.625 11:13:15 json_config -- common/autotest_common.sh@968 -- # kill 1342969 00:06:18.625 11:13:15 json_config -- common/autotest_common.sh@973 -- # wait 1342969 00:06:21.169 11:13:17 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.169 11:13:17 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:21.169 11:13:17 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:21.169 11:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.169 11:13:17 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:21.169 11:13:17 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:21.169 INFO: Success 00:06:21.169 00:06:21.169 real 0m16.477s 00:06:21.169 user 0m17.351s 00:06:21.169 sys 0m1.900s 00:06:21.169 11:13:17 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.169 11:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.169 ************************************ 00:06:21.169 END TEST json_config 00:06:21.169 ************************************ 00:06:21.169 11:13:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.169 11:13:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:21.169 11:13:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.169 11:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:21.169 ************************************ 00:06:21.169 START TEST json_config_extra_key 00:06:21.169 ************************************ 00:06:21.169 11:13:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.169 11:13:18 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.169 11:13:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.169 11:13:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.169 11:13:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.169 11:13:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.169 11:13:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.169 11:13:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.169 11:13:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:21.169 11:13:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.169 11:13:18 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.169 11:13:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:21.169 INFO: launching applications... 00:06:21.169 11:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.169 11:13:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1344304 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.170 Waiting for target to run... 
00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1344304 /var/tmp/spdk_tgt.sock 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1344304 ']' 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.170 11:13:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.170 11:13:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.170 [2024-06-10 11:13:18.170127] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:21.170 [2024-06-10 11:13:18.170197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344304 ] 00:06:21.170 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.430 [2024-06-10 11:13:18.506445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.430 [2024-06-10 11:13:18.564056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.000 11:13:19 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.000 11:13:19 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:22.000 00:06:22.000 11:13:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:22.000 INFO: shutting down applications... 
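At this point the json_config_extra_key test launches spdk_tgt with the extra_key.json configuration on a private RPC socket and waits for it to listen; the shutdown that follows sends SIGINT and polls the pid. A rough standalone equivalent (relative paths assumed; the real test uses the waitforlisten and json_config_test_shutdown_app helpers):

    SPDK_TGT=./build/bin/spdk_tgt                  # assumed build location
    SOCK=/var/tmp/spdk_tgt.sock
    CONF=./test/json_config/extra_key.json
    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --json "$CONF" &
    pid=$!
    until [ -S "$SOCK" ]; do sleep 0.1; done       # crude stand-in for waitforlisten
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do                       # same 30 x 0.5s budget as the trace
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done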
00:06:22.000 11:13:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1344304 ]] 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1344304 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1344304 00:06:22.000 11:13:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1344304 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.570 11:13:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.570 SPDK target shutdown done 00:06:22.570 11:13:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.570 Success 00:06:22.570 00:06:22.570 real 0m1.509s 00:06:22.570 user 0m1.149s 00:06:22.570 sys 0m0.437s 00:06:22.570 11:13:19 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.570 11:13:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.570 ************************************ 00:06:22.570 END TEST json_config_extra_key 00:06:22.570 ************************************ 00:06:22.570 11:13:19 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.570 11:13:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:22.570 11:13:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.570 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:06:22.570 ************************************ 00:06:22.570 START TEST alias_rpc 00:06:22.570 ************************************ 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.570 * Looking for test storage... 
00:06:22.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.570 11:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.570 11:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1344659 00:06:22.570 11:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1344659 00:06:22.570 11:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1344659 ']' 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.570 11:13:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.570 [2024-06-10 11:13:19.748196] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:22.570 [2024-06-10 11:13:19.748260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344659 ] 00:06:22.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.831 [2024-06-10 11:13:19.830893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.831 [2024-06-10 11:13:19.899506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.403 11:13:20 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.403 11:13:20 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:23.403 11:13:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:23.663 11:13:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1344659 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1344659 ']' 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1344659 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1344659 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1344659' 00:06:23.663 killing process with pid 1344659 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@968 -- # kill 1344659 00:06:23.663 11:13:20 alias_rpc -- common/autotest_common.sh@973 -- # wait 1344659 00:06:23.924 00:06:23.924 real 0m1.463s 00:06:23.924 user 0m1.674s 00:06:23.924 sys 0m0.398s 00:06:23.924 11:13:21 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.924 11:13:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.924 
************************************ 00:06:23.924 END TEST alias_rpc 00:06:23.924 ************************************ 00:06:23.924 11:13:21 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:23.924 11:13:21 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.924 11:13:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:23.924 11:13:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.924 11:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:23.924 ************************************ 00:06:23.924 START TEST spdkcli_tcp 00:06:23.924 ************************************ 00:06:23.924 11:13:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:24.184 * Looking for test storage... 00:06:24.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1345012 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1345012 00:06:24.184 11:13:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1345012 ']' 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:24.184 11:13:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.184 [2024-06-10 11:13:21.294159] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
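The spdkcli_tcp section that follows exercises the RPC server over TCP: socat bridges the target's UNIX-domain socket to port 9998 and rpc_get_methods is issued through that bridge, which is what produces the long method list below. A condensed sketch of the same bridge (paths and flags as traced):

    # Expose the UNIX-domain RPC socket on 127.0.0.1:9998 for a single connection.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query the method list over TCP: 100 retries, 2-second timeout, matching the traced flags.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true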
00:06:24.185 [2024-06-10 11:13:21.294262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345012 ] 00:06:24.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.185 [2024-06-10 11:13:21.383675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.444 [2024-06-10 11:13:21.451514] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.444 [2024-06-10 11:13:21.451519] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.024 11:13:22 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:25.024 11:13:22 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:06:25.024 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:25.024 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1345046 00:06:25.024 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:25.289 [ 00:06:25.289 "bdev_malloc_delete", 00:06:25.289 "bdev_malloc_create", 00:06:25.289 "bdev_null_resize", 00:06:25.289 "bdev_null_delete", 00:06:25.289 "bdev_null_create", 00:06:25.289 "bdev_nvme_cuse_unregister", 00:06:25.289 "bdev_nvme_cuse_register", 00:06:25.289 "bdev_opal_new_user", 00:06:25.289 "bdev_opal_set_lock_state", 00:06:25.289 "bdev_opal_delete", 00:06:25.289 "bdev_opal_get_info", 00:06:25.289 "bdev_opal_create", 00:06:25.289 "bdev_nvme_opal_revert", 00:06:25.289 "bdev_nvme_opal_init", 00:06:25.289 "bdev_nvme_send_cmd", 00:06:25.289 "bdev_nvme_get_path_iostat", 00:06:25.289 "bdev_nvme_get_mdns_discovery_info", 00:06:25.289 "bdev_nvme_stop_mdns_discovery", 00:06:25.289 "bdev_nvme_start_mdns_discovery", 00:06:25.289 "bdev_nvme_set_multipath_policy", 00:06:25.289 "bdev_nvme_set_preferred_path", 00:06:25.289 "bdev_nvme_get_io_paths", 00:06:25.289 "bdev_nvme_remove_error_injection", 00:06:25.289 "bdev_nvme_add_error_injection", 00:06:25.289 "bdev_nvme_get_discovery_info", 00:06:25.289 "bdev_nvme_stop_discovery", 00:06:25.289 "bdev_nvme_start_discovery", 00:06:25.289 "bdev_nvme_get_controller_health_info", 00:06:25.289 "bdev_nvme_disable_controller", 00:06:25.289 "bdev_nvme_enable_controller", 00:06:25.289 "bdev_nvme_reset_controller", 00:06:25.289 "bdev_nvme_get_transport_statistics", 00:06:25.289 "bdev_nvme_apply_firmware", 00:06:25.289 "bdev_nvme_detach_controller", 00:06:25.289 "bdev_nvme_get_controllers", 00:06:25.289 "bdev_nvme_attach_controller", 00:06:25.289 "bdev_nvme_set_hotplug", 00:06:25.289 "bdev_nvme_set_options", 00:06:25.289 "bdev_passthru_delete", 00:06:25.289 "bdev_passthru_create", 00:06:25.289 "bdev_lvol_set_parent_bdev", 00:06:25.290 "bdev_lvol_set_parent", 00:06:25.290 "bdev_lvol_check_shallow_copy", 00:06:25.290 "bdev_lvol_start_shallow_copy", 00:06:25.290 "bdev_lvol_grow_lvstore", 00:06:25.290 "bdev_lvol_get_lvols", 00:06:25.290 "bdev_lvol_get_lvstores", 00:06:25.290 "bdev_lvol_delete", 00:06:25.290 "bdev_lvol_set_read_only", 00:06:25.290 "bdev_lvol_resize", 00:06:25.290 "bdev_lvol_decouple_parent", 00:06:25.290 "bdev_lvol_inflate", 00:06:25.290 "bdev_lvol_rename", 00:06:25.290 "bdev_lvol_clone_bdev", 00:06:25.290 "bdev_lvol_clone", 00:06:25.290 "bdev_lvol_snapshot", 00:06:25.290 "bdev_lvol_create", 00:06:25.290 "bdev_lvol_delete_lvstore", 00:06:25.290 "bdev_lvol_rename_lvstore", 
00:06:25.290 "bdev_lvol_create_lvstore", 00:06:25.290 "bdev_raid_set_options", 00:06:25.290 "bdev_raid_remove_base_bdev", 00:06:25.290 "bdev_raid_add_base_bdev", 00:06:25.290 "bdev_raid_delete", 00:06:25.290 "bdev_raid_create", 00:06:25.290 "bdev_raid_get_bdevs", 00:06:25.290 "bdev_error_inject_error", 00:06:25.290 "bdev_error_delete", 00:06:25.290 "bdev_error_create", 00:06:25.290 "bdev_split_delete", 00:06:25.290 "bdev_split_create", 00:06:25.290 "bdev_delay_delete", 00:06:25.290 "bdev_delay_create", 00:06:25.290 "bdev_delay_update_latency", 00:06:25.290 "bdev_zone_block_delete", 00:06:25.290 "bdev_zone_block_create", 00:06:25.290 "blobfs_create", 00:06:25.290 "blobfs_detect", 00:06:25.290 "blobfs_set_cache_size", 00:06:25.290 "bdev_aio_delete", 00:06:25.290 "bdev_aio_rescan", 00:06:25.290 "bdev_aio_create", 00:06:25.290 "bdev_ftl_set_property", 00:06:25.290 "bdev_ftl_get_properties", 00:06:25.290 "bdev_ftl_get_stats", 00:06:25.290 "bdev_ftl_unmap", 00:06:25.290 "bdev_ftl_unload", 00:06:25.290 "bdev_ftl_delete", 00:06:25.290 "bdev_ftl_load", 00:06:25.290 "bdev_ftl_create", 00:06:25.290 "bdev_virtio_attach_controller", 00:06:25.290 "bdev_virtio_scsi_get_devices", 00:06:25.290 "bdev_virtio_detach_controller", 00:06:25.290 "bdev_virtio_blk_set_hotplug", 00:06:25.290 "bdev_iscsi_delete", 00:06:25.290 "bdev_iscsi_create", 00:06:25.290 "bdev_iscsi_set_options", 00:06:25.290 "accel_error_inject_error", 00:06:25.290 "ioat_scan_accel_module", 00:06:25.290 "dsa_scan_accel_module", 00:06:25.290 "iaa_scan_accel_module", 00:06:25.290 "vfu_virtio_create_scsi_endpoint", 00:06:25.290 "vfu_virtio_scsi_remove_target", 00:06:25.290 "vfu_virtio_scsi_add_target", 00:06:25.290 "vfu_virtio_create_blk_endpoint", 00:06:25.290 "vfu_virtio_delete_endpoint", 00:06:25.290 "keyring_file_remove_key", 00:06:25.290 "keyring_file_add_key", 00:06:25.290 "keyring_linux_set_options", 00:06:25.290 "iscsi_get_histogram", 00:06:25.290 "iscsi_enable_histogram", 00:06:25.290 "iscsi_set_options", 00:06:25.290 "iscsi_get_auth_groups", 00:06:25.290 "iscsi_auth_group_remove_secret", 00:06:25.290 "iscsi_auth_group_add_secret", 00:06:25.290 "iscsi_delete_auth_group", 00:06:25.290 "iscsi_create_auth_group", 00:06:25.290 "iscsi_set_discovery_auth", 00:06:25.290 "iscsi_get_options", 00:06:25.290 "iscsi_target_node_request_logout", 00:06:25.290 "iscsi_target_node_set_redirect", 00:06:25.290 "iscsi_target_node_set_auth", 00:06:25.290 "iscsi_target_node_add_lun", 00:06:25.290 "iscsi_get_stats", 00:06:25.290 "iscsi_get_connections", 00:06:25.290 "iscsi_portal_group_set_auth", 00:06:25.290 "iscsi_start_portal_group", 00:06:25.290 "iscsi_delete_portal_group", 00:06:25.290 "iscsi_create_portal_group", 00:06:25.290 "iscsi_get_portal_groups", 00:06:25.290 "iscsi_delete_target_node", 00:06:25.290 "iscsi_target_node_remove_pg_ig_maps", 00:06:25.290 "iscsi_target_node_add_pg_ig_maps", 00:06:25.290 "iscsi_create_target_node", 00:06:25.290 "iscsi_get_target_nodes", 00:06:25.290 "iscsi_delete_initiator_group", 00:06:25.290 "iscsi_initiator_group_remove_initiators", 00:06:25.290 "iscsi_initiator_group_add_initiators", 00:06:25.290 "iscsi_create_initiator_group", 00:06:25.290 "iscsi_get_initiator_groups", 00:06:25.290 "nvmf_set_crdt", 00:06:25.290 "nvmf_set_config", 00:06:25.290 "nvmf_set_max_subsystems", 00:06:25.290 "nvmf_stop_mdns_prr", 00:06:25.290 "nvmf_publish_mdns_prr", 00:06:25.290 "nvmf_subsystem_get_listeners", 00:06:25.290 "nvmf_subsystem_get_qpairs", 00:06:25.290 "nvmf_subsystem_get_controllers", 00:06:25.290 "nvmf_get_stats", 00:06:25.290 
"nvmf_get_transports", 00:06:25.290 "nvmf_create_transport", 00:06:25.290 "nvmf_get_targets", 00:06:25.290 "nvmf_delete_target", 00:06:25.290 "nvmf_create_target", 00:06:25.290 "nvmf_subsystem_allow_any_host", 00:06:25.290 "nvmf_subsystem_remove_host", 00:06:25.290 "nvmf_subsystem_add_host", 00:06:25.290 "nvmf_ns_remove_host", 00:06:25.290 "nvmf_ns_add_host", 00:06:25.290 "nvmf_subsystem_remove_ns", 00:06:25.290 "nvmf_subsystem_add_ns", 00:06:25.290 "nvmf_subsystem_listener_set_ana_state", 00:06:25.290 "nvmf_discovery_get_referrals", 00:06:25.290 "nvmf_discovery_remove_referral", 00:06:25.290 "nvmf_discovery_add_referral", 00:06:25.290 "nvmf_subsystem_remove_listener", 00:06:25.290 "nvmf_subsystem_add_listener", 00:06:25.290 "nvmf_delete_subsystem", 00:06:25.290 "nvmf_create_subsystem", 00:06:25.290 "nvmf_get_subsystems", 00:06:25.290 "env_dpdk_get_mem_stats", 00:06:25.290 "nbd_get_disks", 00:06:25.290 "nbd_stop_disk", 00:06:25.290 "nbd_start_disk", 00:06:25.290 "ublk_recover_disk", 00:06:25.290 "ublk_get_disks", 00:06:25.290 "ublk_stop_disk", 00:06:25.290 "ublk_start_disk", 00:06:25.290 "ublk_destroy_target", 00:06:25.290 "ublk_create_target", 00:06:25.290 "virtio_blk_create_transport", 00:06:25.290 "virtio_blk_get_transports", 00:06:25.290 "vhost_controller_set_coalescing", 00:06:25.290 "vhost_get_controllers", 00:06:25.290 "vhost_delete_controller", 00:06:25.290 "vhost_create_blk_controller", 00:06:25.290 "vhost_scsi_controller_remove_target", 00:06:25.290 "vhost_scsi_controller_add_target", 00:06:25.290 "vhost_start_scsi_controller", 00:06:25.290 "vhost_create_scsi_controller", 00:06:25.290 "thread_set_cpumask", 00:06:25.290 "framework_get_scheduler", 00:06:25.290 "framework_set_scheduler", 00:06:25.290 "framework_get_reactors", 00:06:25.290 "thread_get_io_channels", 00:06:25.290 "thread_get_pollers", 00:06:25.290 "thread_get_stats", 00:06:25.290 "framework_monitor_context_switch", 00:06:25.290 "spdk_kill_instance", 00:06:25.290 "log_enable_timestamps", 00:06:25.290 "log_get_flags", 00:06:25.290 "log_clear_flag", 00:06:25.290 "log_set_flag", 00:06:25.290 "log_get_level", 00:06:25.290 "log_set_level", 00:06:25.290 "log_get_print_level", 00:06:25.290 "log_set_print_level", 00:06:25.290 "framework_enable_cpumask_locks", 00:06:25.290 "framework_disable_cpumask_locks", 00:06:25.290 "framework_wait_init", 00:06:25.290 "framework_start_init", 00:06:25.290 "scsi_get_devices", 00:06:25.290 "bdev_get_histogram", 00:06:25.290 "bdev_enable_histogram", 00:06:25.290 "bdev_set_qos_limit", 00:06:25.290 "bdev_set_qd_sampling_period", 00:06:25.290 "bdev_get_bdevs", 00:06:25.290 "bdev_reset_iostat", 00:06:25.290 "bdev_get_iostat", 00:06:25.290 "bdev_examine", 00:06:25.290 "bdev_wait_for_examine", 00:06:25.290 "bdev_set_options", 00:06:25.290 "notify_get_notifications", 00:06:25.290 "notify_get_types", 00:06:25.290 "accel_get_stats", 00:06:25.290 "accel_set_options", 00:06:25.290 "accel_set_driver", 00:06:25.290 "accel_crypto_key_destroy", 00:06:25.290 "accel_crypto_keys_get", 00:06:25.290 "accel_crypto_key_create", 00:06:25.290 "accel_assign_opc", 00:06:25.290 "accel_get_module_info", 00:06:25.290 "accel_get_opc_assignments", 00:06:25.290 "vmd_rescan", 00:06:25.290 "vmd_remove_device", 00:06:25.290 "vmd_enable", 00:06:25.290 "sock_get_default_impl", 00:06:25.290 "sock_set_default_impl", 00:06:25.290 "sock_impl_set_options", 00:06:25.290 "sock_impl_get_options", 00:06:25.290 "iobuf_get_stats", 00:06:25.290 "iobuf_set_options", 00:06:25.290 "keyring_get_keys", 00:06:25.290 "framework_get_pci_devices", 
00:06:25.290 "framework_get_config", 00:06:25.290 "framework_get_subsystems", 00:06:25.290 "vfu_tgt_set_base_path", 00:06:25.290 "trace_get_info", 00:06:25.290 "trace_get_tpoint_group_mask", 00:06:25.290 "trace_disable_tpoint_group", 00:06:25.290 "trace_enable_tpoint_group", 00:06:25.290 "trace_clear_tpoint_mask", 00:06:25.290 "trace_set_tpoint_mask", 00:06:25.290 "spdk_get_version", 00:06:25.290 "rpc_get_methods" 00:06:25.290 ] 00:06:25.290 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:25.290 11:13:22 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:25.290 11:13:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.291 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:25.291 11:13:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1345012 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1345012 ']' 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1345012 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1345012 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1345012' 00:06:25.291 killing process with pid 1345012 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1345012 00:06:25.291 11:13:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1345012 00:06:25.551 00:06:25.551 real 0m1.502s 00:06:25.551 user 0m2.837s 00:06:25.551 sys 0m0.444s 00:06:25.551 11:13:22 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.551 11:13:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.551 ************************************ 00:06:25.551 END TEST spdkcli_tcp 00:06:25.551 ************************************ 00:06:25.551 11:13:22 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.551 11:13:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.552 11:13:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.552 11:13:22 -- common/autotest_common.sh@10 -- # set +x 00:06:25.552 ************************************ 00:06:25.552 START TEST dpdk_mem_utility 00:06:25.552 ************************************ 00:06:25.552 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.812 * Looking for test storage... 
00:06:25.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.812 11:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.812 11:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1345387 00:06:25.812 11:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1345387 00:06:25.812 11:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1345387 ']' 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.812 11:13:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 [2024-06-10 11:13:22.859705] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:25.812 [2024-06-10 11:13:22.859773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345387 ] 00:06:25.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.812 [2024-06-10 11:13:22.948485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.812 [2024-06-10 11:13:23.015933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.751 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.751 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:06:26.751 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:26.751 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:26.751 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.751 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 { 00:06:26.751 "filename": "/tmp/spdk_mem_dump.txt" 00:06:26.751 } 00:06:26.751 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.751 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:26.751 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:26.751 1 heaps totaling size 814.000000 MiB 00:06:26.751 size: 814.000000 MiB heap id: 0 00:06:26.751 end heaps---------- 00:06:26.751 8 mempools totaling size 598.116089 MiB 00:06:26.751 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:26.751 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:26.751 size: 84.521057 MiB name: bdev_io_1345387 00:06:26.751 size: 51.011292 MiB name: evtpool_1345387 00:06:26.751 size: 50.003479 MiB name: 
msgpool_1345387 00:06:26.751 size: 21.763794 MiB name: PDU_Pool 00:06:26.751 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:26.751 size: 0.026123 MiB name: Session_Pool 00:06:26.751 end mempools------- 00:06:26.751 6 memzones totaling size 4.142822 MiB 00:06:26.751 size: 1.000366 MiB name: RG_ring_0_1345387 00:06:26.751 size: 1.000366 MiB name: RG_ring_1_1345387 00:06:26.751 size: 1.000366 MiB name: RG_ring_4_1345387 00:06:26.751 size: 1.000366 MiB name: RG_ring_5_1345387 00:06:26.751 size: 0.125366 MiB name: RG_ring_2_1345387 00:06:26.751 size: 0.015991 MiB name: RG_ring_3_1345387 00:06:26.751 end memzones------- 00:06:26.751 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:26.751 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:26.751 list of free elements. size: 12.519348 MiB 00:06:26.751 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:26.751 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:26.751 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:26.751 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:26.751 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:26.751 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:26.751 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:26.751 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:26.751 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:26.751 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:26.751 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:26.751 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:26.751 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:26.751 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:26.751 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:26.751 list of standard malloc elements. 
size: 199.218079 MiB 00:06:26.751 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:26.751 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:26.751 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:26.751 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:26.751 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:26.751 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:26.751 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:26.751 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:26.751 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:26.751 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:26.751 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:26.751 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:26.751 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:26.751 list of memzone associated elements. 
size: 602.262573 MiB 00:06:26.751 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:26.751 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:26.751 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:26.751 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:26.751 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:26.751 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1345387_0 00:06:26.751 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:26.751 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1345387_0 00:06:26.751 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:26.751 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1345387_0 00:06:26.751 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:26.751 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:26.751 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:26.751 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:26.751 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:26.751 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1345387 00:06:26.751 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:26.752 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1345387 00:06:26.752 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:26.752 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1345387 00:06:26.752 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:26.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:26.752 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:26.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:26.752 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:26.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:26.752 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:26.752 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:26.752 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:26.752 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1345387 00:06:26.752 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:26.752 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1345387 00:06:26.752 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:26.752 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1345387 00:06:26.752 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:26.752 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1345387 00:06:26.752 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:26.752 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1345387 00:06:26.752 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:26.752 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:26.752 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:26.752 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:26.752 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:26.752 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:26.752 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:26.752 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1345387 00:06:26.752 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:26.752 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:26.752 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:26.752 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:26.752 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:26.752 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1345387 00:06:26.752 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:26.752 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:26.752 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:26.752 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1345387 00:06:26.752 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:26.752 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1345387 00:06:26.752 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:26.752 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:26.752 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:26.752 11:13:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1345387 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1345387 ']' 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1345387 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1345387 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1345387' 00:06:26.752 killing process with pid 1345387 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1345387 00:06:26.752 11:13:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1345387 00:06:27.013 00:06:27.013 real 0m1.375s 00:06:27.013 user 0m1.535s 00:06:27.013 sys 0m0.384s 00:06:27.013 11:13:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.013 11:13:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 ************************************ 00:06:27.013 END TEST dpdk_mem_utility 00:06:27.013 ************************************ 00:06:27.013 11:13:24 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:27.013 11:13:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:27.013 11:13:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.013 11:13:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 ************************************ 00:06:27.013 START TEST event 00:06:27.013 ************************************ 00:06:27.013 11:13:24 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:27.273 * Looking for test storage... 
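Referring back to the dpdk_mem_utility run above: the test asks the running target to dump its DPDK memory map and then post-processes the dump twice, first summarised and then per heap. A hedged sketch of that sequence against an already-running spdk_tgt (default RPC socket assumed):

    # Ask the target to write its DPDK memory dump; the RPC answers with the dump file name
    # (/tmp/spdk_mem_dump.txt in the trace above).
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarise heaps, mempools and memzones from that dump.
    ./scripts/dpdk_mem_info.py
    # Print the detailed element list for heap id 0, as the trace does with -m 0.
    ./scripts/dpdk_mem_info.py -m 0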
00:06:27.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:27.273 11:13:24 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:27.273 11:13:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.273 11:13:24 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.274 11:13:24 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:27.274 11:13:24 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.274 11:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 ************************************ 00:06:27.274 START TEST event_perf 00:06:27.274 ************************************ 00:06:27.274 11:13:24 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.274 Running I/O for 1 seconds...[2024-06-10 11:13:24.317710] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:27.274 [2024-06-10 11:13:24.317814] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345648 ] 00:06:27.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.274 [2024-06-10 11:13:24.408210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.274 [2024-06-10 11:13:24.479102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.274 [2024-06-10 11:13:24.479247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.274 [2024-06-10 11:13:24.479397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.274 Running I/O for 1 seconds...[2024-06-10 11:13:24.479398] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.657 00:06:28.657 lcore 0: 193939 00:06:28.657 lcore 1: 193940 00:06:28.657 lcore 2: 193937 00:06:28.657 lcore 3: 193938 00:06:28.657 done. 00:06:28.657 00:06:28.657 real 0m1.235s 00:06:28.657 user 0m4.135s 00:06:28.657 sys 0m0.100s 00:06:28.657 11:13:25 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:28.657 11:13:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 ************************************ 00:06:28.657 END TEST event_perf 00:06:28.657 ************************************ 00:06:28.657 11:13:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.657 11:13:25 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:28.657 11:13:25 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.657 11:13:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 ************************************ 00:06:28.657 START TEST event_reactor 00:06:28.657 ************************************ 00:06:28.657 11:13:25 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.657 [2024-06-10 11:13:25.627350] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:06:28.657 [2024-06-10 11:13:25.627451] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345802 ] 00:06:28.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.657 [2024-06-10 11:13:25.715732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.657 [2024-06-10 11:13:25.790974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.038 test_start 00:06:30.038 oneshot 00:06:30.038 tick 100 00:06:30.038 tick 100 00:06:30.038 tick 250 00:06:30.038 tick 100 00:06:30.038 tick 100 00:06:30.038 tick 250 00:06:30.038 tick 100 00:06:30.038 tick 500 00:06:30.038 tick 100 00:06:30.038 tick 100 00:06:30.038 tick 250 00:06:30.038 tick 100 00:06:30.038 tick 100 00:06:30.038 test_end 00:06:30.038 00:06:30.038 real 0m1.233s 00:06:30.038 user 0m1.139s 00:06:30.038 sys 0m0.089s 00:06:30.038 11:13:26 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.038 11:13:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:30.038 ************************************ 00:06:30.038 END TEST event_reactor 00:06:30.038 ************************************ 00:06:30.038 11:13:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.038 11:13:26 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:30.038 11:13:26 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.039 11:13:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.039 ************************************ 00:06:30.039 START TEST event_reactor_perf 00:06:30.039 ************************************ 00:06:30.039 11:13:26 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.039 [2024-06-10 11:13:26.935022] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:06:30.039 [2024-06-10 11:13:26.935124] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346108 ] 00:06:30.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.039 [2024-06-10 11:13:27.020899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.039 [2024-06-10 11:13:27.086562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.978 test_start 00:06:30.978 test_end 00:06:30.978 Performance: 399476 events per second 00:06:30.978 00:06:30.978 real 0m1.223s 00:06:30.978 user 0m1.139s 00:06:30.978 sys 0m0.080s 00:06:30.978 11:13:28 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.978 11:13:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.978 ************************************ 00:06:30.978 END TEST event_reactor_perf 00:06:30.978 ************************************ 00:06:30.978 11:13:28 event -- event/event.sh@49 -- # uname -s 00:06:30.978 11:13:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.978 11:13:28 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.978 11:13:28 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.978 11:13:28 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.978 11:13:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.239 ************************************ 00:06:31.239 START TEST event_scheduler 00:06:31.239 ************************************ 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:31.239 * Looking for test storage... 00:06:31.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:31.239 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:31.239 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1346450 00:06:31.239 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.239 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:31.239 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1346450 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1346450 ']' 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
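The event_scheduler test that starts here was launched above with --wait-for-rpc, so all further setup happens over RPC, as the trace below shows: the dynamic scheduler is selected, framework initialisation is completed, and worker threads are then created through a test-only RPC plugin. A compressed sketch of that flow (the trace uses the rpc_cmd wrapper; rpc.py is called directly here on the default socket, assuming the scheduler_plugin module is importable):

    # Start the test app on cores 0-3 with main core 2, paused until RPC setup is done.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    sleep 2                                   # crude stand-in for waitforlisten
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # Create a pinned, always-busy thread on core 0, mirroring the first thread the trace creates.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100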
00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:31.239 11:13:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.239 [2024-06-10 11:13:28.363061] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:31.239 [2024-06-10 11:13:28.363119] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346450 ] 00:06:31.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.239 [2024-06-10 11:13:28.423857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.499 [2024-06-10 11:13:28.484342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.499 [2024-06-10 11:13:28.484457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.499 [2024-06-10 11:13:28.484604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.499 [2024-06-10 11:13:28.484605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:06:31.499 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 POWER: Env isn't set yet! 00:06:31.499 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:31.499 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:31.499 POWER: Cannot set governor of lcore 0 to userspace 00:06:31.499 POWER: Attempting to initialise PSTAT power management... 
00:06:31.499 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:31.499 POWER: Initialized successfully for lcore 0 power management 00:06:31.499 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:31.499 POWER: Initialized successfully for lcore 1 power management 00:06:31.499 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:31.499 POWER: Initialized successfully for lcore 2 power management 00:06:31.499 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:31.499 POWER: Initialized successfully for lcore 3 power management 00:06:31.499 [2024-06-10 11:13:28.570031] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:31.499 [2024-06-10 11:13:28.570042] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:31.499 [2024-06-10 11:13:28.570047] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.499 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 [2024-06-10 11:13:28.628620] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.499 11:13:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 ************************************ 00:06:31.499 START TEST scheduler_create_thread 00:06:31.499 ************************************ 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 2 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 3 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.499 4 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.499 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 5 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 6 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 7 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 8 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 9 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:06:31.760 11:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.171 10 00:06:33.171 11:13:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.171 11:13:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.171 11:13:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.171 11:13:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.764 11:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.764 11:13:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.764 11:13:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.764 11:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.764 11:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.705 11:13:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.705 11:13:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.705 11:13:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.705 11:13:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.276 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:35.276 11:13:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.276 11:13:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.276 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:35.276 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.847 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:35.847 00:06:35.847 real 0m4.216s 00:06:35.847 user 0m0.027s 00:06:35.847 sys 0m0.004s 00:06:35.848 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.848 11:13:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.848 ************************************ 00:06:35.848 END TEST scheduler_create_thread 00:06:35.848 ************************************ 00:06:35.848 11:13:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.848 11:13:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1346450 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1346450 ']' 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1346450 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
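The scheduler_create_thread trace above reduces to a short RPC sequence against the running scheduler test app; rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the numbers echoed after each create (2 through 12) are the thread ids returned by the RPC. A minimal sketch of that sequence, not a verbatim excerpt of scheduler.sh:

    # pinned busy/idle threads, one per core mask, with an activity percentage
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned thread whose activity is raised after creation
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # throwaway thread created and then deleted to exercise removal
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"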
00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1346450 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1346450' 00:06:35.848 killing process with pid 1346450 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1346450 00:06:35.848 11:13:32 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1346450 00:06:36.108 [2024-06-10 11:13:33.160730] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:36.108 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:36.108 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:36.108 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:36.108 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:36.108 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:36.108 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:36.108 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:36.108 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:36.369 00:06:36.369 real 0m5.127s 00:06:36.369 user 0m10.798s 00:06:36.369 sys 0m0.337s 00:06:36.369 11:13:33 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.369 11:13:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 ************************************ 00:06:36.369 END TEST event_scheduler 00:06:36.369 ************************************ 00:06:36.369 11:13:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:36.369 11:13:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:36.369 11:13:33 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:36.369 11:13:33 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.369 11:13:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 ************************************ 00:06:36.369 START TEST app_repeat 00:06:36.369 ************************************ 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1347383 00:06:36.369 11:13:33 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1347383' 00:06:36.369 Process app_repeat pid: 1347383 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:36.369 spdk_app_start Round 0 00:06:36.369 11:13:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347383 /var/tmp/spdk-nbd.sock 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1347383 ']' 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:36.369 11:13:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 [2024-06-10 11:13:33.464530] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:36.369 [2024-06-10 11:13:33.464615] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347383 ] 00:06:36.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.369 [2024-06-10 11:13:33.551003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.629 [2024-06-10 11:13:33.620315] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.629 [2024-06-10 11:13:33.620321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.629 11:13:33 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.629 11:13:33 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:36.629 11:13:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.889 Malloc0 00:06:36.889 11:13:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.889 Malloc1 00:06:37.149 11:13:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.149 11:13:34 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.149 /dev/nbd0 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.149 1+0 records in 00:06:37.149 1+0 records out 00:06:37.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242571 s, 16.9 MB/s 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.149 11:13:34 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.149 11:13:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.409 /dev/nbd1 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:37.409 11:13:34 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.409 1+0 records in 00:06:37.409 1+0 records out 00:06:37.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233633 s, 17.5 MB/s 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.409 11:13:34 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.409 11:13:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.670 { 00:06:37.670 "nbd_device": "/dev/nbd0", 00:06:37.670 "bdev_name": "Malloc0" 00:06:37.670 }, 00:06:37.670 { 00:06:37.670 "nbd_device": "/dev/nbd1", 00:06:37.670 "bdev_name": "Malloc1" 00:06:37.670 } 00:06:37.670 ]' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.670 { 00:06:37.670 "nbd_device": "/dev/nbd0", 00:06:37.670 "bdev_name": "Malloc0" 00:06:37.670 }, 00:06:37.670 { 00:06:37.670 "nbd_device": "/dev/nbd1", 00:06:37.670 "bdev_name": "Malloc1" 00:06:37.670 } 00:06:37.670 ]' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.670 /dev/nbd1' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.670 /dev/nbd1' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.670 11:13:34 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.670 256+0 records in 00:06:37.670 256+0 records out 00:06:37.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116647 s, 89.9 MB/s 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.670 256+0 records in 00:06:37.670 256+0 records out 00:06:37.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152861 s, 68.6 MB/s 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.670 256+0 records in 00:06:37.670 256+0 records out 00:06:37.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152505 s, 68.8 MB/s 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.670 11:13:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.932 11:13:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.932 11:13:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.194 11:13:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.455 11:13:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.455 11:13:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.715 11:13:35 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:38.715 [2024-06-10 11:13:35.838778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.715 [2024-06-10 11:13:35.899816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.715 [2024-06-10 11:13:35.899826] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.715 [2024-06-10 11:13:35.929955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.715 [2024-06-10 11:13:35.929990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.016 11:13:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.016 11:13:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.016 spdk_app_start Round 1 00:06:42.016 11:13:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347383 /var/tmp/spdk-nbd.sock 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1347383 ']' 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:42.016 11:13:38 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:42.016 11:13:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.016 Malloc0 00:06:42.016 11:13:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.277 Malloc1 00:06:42.277 11:13:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.277 11:13:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.277 11:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.277 11:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.277 11:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.277 11:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.278 /dev/nbd0 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.278 11:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.278 11:13:39 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:42.278 11:13:39 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:42.278 11:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:42.278 11:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:42.278 11:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.539 1+0 records in 00:06:42.539 1+0 records out 00:06:42.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017879 s, 22.9 MB/s 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:42.539 /dev/nbd1 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.539 1+0 records in 00:06:42.539 1+0 records out 00:06:42.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213992 s, 19.1 MB/s 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:42.539 11:13:39 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.539 11:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.800 { 00:06:42.800 "nbd_device": "/dev/nbd0", 00:06:42.800 "bdev_name": "Malloc0" 00:06:42.800 }, 00:06:42.800 { 00:06:42.800 "nbd_device": "/dev/nbd1", 00:06:42.800 "bdev_name": "Malloc1" 00:06:42.800 } 00:06:42.800 ]' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.800 { 00:06:42.800 "nbd_device": "/dev/nbd0", 00:06:42.800 "bdev_name": "Malloc0" 00:06:42.800 }, 00:06:42.800 { 00:06:42.800 "nbd_device": "/dev/nbd1", 00:06:42.800 "bdev_name": "Malloc1" 00:06:42.800 } 00:06:42.800 ]' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.800 /dev/nbd1' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.800 /dev/nbd1' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.800 256+0 records in 00:06:42.800 256+0 records out 00:06:42.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119639 s, 87.6 MB/s 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.800 256+0 records in 00:06:42.800 256+0 records out 00:06:42.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149369 s, 70.2 MB/s 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.800 11:13:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.800 256+0 records in 00:06:42.800 256+0 records out 00:06:42.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160378 s, 65.4 MB/s 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.800 11:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.061 
11:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.061 11:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.322 11:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.583 11:13:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.583 11:13:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.844 11:13:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.844 [2024-06-10 11:13:40.993725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.844 [2024-06-10 11:13:41.055058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.844 [2024-06-10 11:13:41.055062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.106 [2024-06-10 11:13:41.085970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:44.106 [2024-06-10 11:13:41.086003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.692 11:13:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.692 11:13:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.692 spdk_app_start Round 2 00:06:46.692 11:13:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347383 /var/tmp/spdk-nbd.sock 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1347383 ']' 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:46.692 11:13:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.953 11:13:44 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:46.953 11:13:44 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:46.953 11:13:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.213 Malloc0 00:06:47.213 11:13:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.213 Malloc1 00:06:47.474 11:13:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.474 /dev/nbd0 00:06:47.474 
11:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.474 11:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.474 1+0 records in 00:06:47.474 1+0 records out 00:06:47.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234603 s, 17.5 MB/s 00:06:47.474 11:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.475 11:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:47.475 11:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.475 11:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:47.475 11:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:47.475 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.475 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.475 11:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.736 /dev/nbd1 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.736 1+0 records in 00:06:47.736 1+0 records out 00:06:47.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300622 s, 13.6 MB/s 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:47.736 11:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.736 11:13:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.997 { 00:06:47.997 "nbd_device": "/dev/nbd0", 00:06:47.997 "bdev_name": "Malloc0" 00:06:47.997 }, 00:06:47.997 { 00:06:47.997 "nbd_device": "/dev/nbd1", 00:06:47.997 "bdev_name": "Malloc1" 00:06:47.997 } 00:06:47.997 ]' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.997 { 00:06:47.997 "nbd_device": "/dev/nbd0", 00:06:47.997 "bdev_name": "Malloc0" 00:06:47.997 }, 00:06:47.997 { 00:06:47.997 "nbd_device": "/dev/nbd1", 00:06:47.997 "bdev_name": "Malloc1" 00:06:47.997 } 00:06:47.997 ]' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.997 /dev/nbd1' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.997 /dev/nbd1' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.997 256+0 records in 00:06:47.997 256+0 records out 00:06:47.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121035 s, 86.6 MB/s 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.997 256+0 records in 00:06:47.997 256+0 records out 00:06:47.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151262 s, 69.3 MB/s 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.997 256+0 records in 00:06:47.997 256+0 records out 00:06:47.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158618 s, 66.1 MB/s 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.997 11:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.258 11:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.518 11:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.779 11:13:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.779 11:13:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.040 11:13:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.040 [2024-06-10 11:13:46.135184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.040 [2024-06-10 11:13:46.196473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.040 [2024-06-10 11:13:46.196477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.040 [2024-06-10 11:13:46.226517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.040 [2024-06-10 11:13:46.226551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
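Each round's data check above follows the same write-and-verify cycle over the exported nbd devices; condensed, with the workspace path again shortened to $SPDK_DIR, it amounts to:

    tmp=$SPDK_DIR/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct    # write it to each device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                               # read back and compare byte-for-byte
    done
    rm $tmp
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1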
00:06:52.340 11:13:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1347383 /var/tmp/spdk-nbd.sock 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1347383 ']' 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:52.340 11:13:49 event.app_repeat -- event/event.sh@39 -- # killprocess 1347383 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1347383 ']' 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1347383 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1347383 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1347383' 00:06:52.340 killing process with pid 1347383 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1347383 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1347383 00:06:52.340 spdk_app_start is called in Round 0. 00:06:52.340 Shutdown signal received, stop current app iteration 00:06:52.340 Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 reinitialization... 00:06:52.340 spdk_app_start is called in Round 1. 00:06:52.340 Shutdown signal received, stop current app iteration 00:06:52.340 Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 reinitialization... 00:06:52.340 spdk_app_start is called in Round 2. 00:06:52.340 Shutdown signal received, stop current app iteration 00:06:52.340 Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 reinitialization... 00:06:52.340 spdk_app_start is called in Round 3. 
00:06:52.340 Shutdown signal received, stop current app iteration 00:06:52.340 11:13:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:52.340 11:13:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:52.340 00:06:52.340 real 0m15.937s 00:06:52.340 user 0m35.017s 00:06:52.340 sys 0m2.337s 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.340 11:13:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.340 ************************************ 00:06:52.340 END TEST app_repeat 00:06:52.340 ************************************ 00:06:52.340 11:13:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:52.340 11:13:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:52.340 11:13:49 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:52.340 11:13:49 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.340 11:13:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.340 ************************************ 00:06:52.340 START TEST cpu_locks 00:06:52.340 ************************************ 00:06:52.340 11:13:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:52.340 * Looking for test storage... 00:06:52.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:52.340 11:13:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:52.340 11:13:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:52.340 11:13:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:52.340 11:13:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:52.341 11:13:49 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:52.341 11:13:49 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.341 11:13:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.602 ************************************ 00:06:52.602 START TEST default_locks 00:06:52.602 ************************************ 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1350342 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1350342 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1350342 ']' 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
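The default_locks test that begins here launches a single spdk_tgt pinned to core 0 and waits for its RPC socket before inspecting the lock files. A rough sketch of that setup step, using the binary path, core mask and socket seen in the trace (the polling loop below is a simplified stand-in for the waitforlisten helper, which additionally checks that RPC responds):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  rpc_sock=/var/tmp/spdk.sock

  # Start the target on core 0 only; with cpumask locks enabled (the default)
  # it takes a lock on /var/tmp/spdk_cpu_lock_000 for that core, which the
  # later lslocks check looks for.
  "$spdk_tgt" -m 0x1 &
  spdk_tgt_pid=$!

  # Poll until the process is listening on its UNIX-domain RPC socket.
  for i in {1..100}; do
      [ -S "$rpc_sock" ] && break
      sleep 0.1
  done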
00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:52.602 11:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.602 [2024-06-10 11:13:49.631059] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:52.602 [2024-06-10 11:13:49.631122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350342 ] 00:06:52.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.602 [2024-06-10 11:13:49.714395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.602 [2024-06-10 11:13:49.783079] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.559 11:13:50 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.559 11:13:50 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:53.559 11:13:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1350342 00:06:53.559 11:13:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1350342 00:06:53.559 11:13:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.819 lslocks: write error 00:06:53.819 11:13:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1350342 00:06:53.819 11:13:50 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1350342 ']' 00:06:53.819 11:13:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1350342 00:06:53.819 11:13:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:53.819 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:53.819 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1350342' 00:06:54.080 killing process with pid 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1350342 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1350342 ']' 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1350342) - No such process 00:06:54.080 ERROR: process (pid: 1350342) is no longer running 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.080 00:06:54.080 real 0m1.677s 00:06:54.080 user 0m1.816s 00:06:54.080 sys 0m0.570s 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.080 11:13:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.080 ************************************ 00:06:54.080 END TEST default_locks 00:06:54.080 ************************************ 00:06:54.080 11:13:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:54.080 11:13:51 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.080 11:13:51 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.080 11:13:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.342 ************************************ 00:06:54.342 START TEST default_locks_via_rpc 00:06:54.342 ************************************ 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1350655 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1350655 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1350655 ']' 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:54.342 11:13:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.342 [2024-06-10 11:13:51.385012] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:54.342 [2024-06-10 11:13:51.385063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350655 ] 00:06:54.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.342 [2024-06-10 11:13:51.465741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.342 [2024-06-10 11:13:51.530808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1350655 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1350655 00:06:55.286 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.548 11:13:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1350655 00:06:55.548 11:13:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1350655 ']' 00:06:55.548 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1350655 00:06:55.548 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:55.548 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.548 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1350655 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1350655' 00:06:55.810 killing process with pid 1350655 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1350655 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1350655 00:06:55.810 00:06:55.810 real 0m1.658s 00:06:55.810 user 0m1.813s 00:06:55.810 sys 0m0.555s 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.810 11:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.810 ************************************ 00:06:55.810 END TEST default_locks_via_rpc 00:06:55.810 ************************************ 00:06:55.810 11:13:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:55.810 11:13:53 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:55.810 11:13:53 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:55.810 11:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.071 ************************************ 00:06:56.071 START TEST non_locking_app_on_locked_coremask 00:06:56.071 ************************************ 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1350950 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1350950 /var/tmp/spdk.sock 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1350950 ']' 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:56.071 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
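The default_locks_via_rpc run that just finished drives the same locks through RPC rather than command-line flags: the running target is told to release its per-core locks, the test confirms no spdk_cpu_lock file is held, and the locks are then re-acquired. A minimal sketch of that toggle with rpc.py (method names as they appear in the trace; $spdk_tgt_pid is assumed to be the pid of the running target):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Release the per-core lock files held by the running target...
  "$rpc" framework_disable_cpumask_locks

  # ...verify it no longer holds any spdk_cpu_lock file...
  if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
      echo "core locks still held" >&2
  fi

  # ...and take them again.
  "$rpc" framework_enable_cpumask_locks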
00:06:56.072 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:56.072 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.072 [2024-06-10 11:13:53.118288] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:56.072 [2024-06-10 11:13:53.118340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350950 ] 00:06:56.072 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.072 [2024-06-10 11:13:53.201716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.072 [2024-06-10 11:13:53.270028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1351081 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1351081 /var/tmp/spdk2.sock 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1351081 ']' 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:57.014 11:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.014 [2024-06-10 11:13:54.008122] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:57.014 [2024-06-10 11:13:54.008171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351081 ] 00:06:57.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.014 [2024-06-10 11:13:54.100367] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
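non_locking_app_on_locked_coremask then exercises the complementary case: while the first target holds the core-0 lock, a second spdk_tgt is started on the same mask but with --disable-cpumask-locks and its own RPC socket, so it comes up without contending for the lock file (the "CPU core locks deactivated" notice above). A short sketch of that second launch, mirroring the command in the trace:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  # Same core mask as the first instance, but skip the per-core lock files
  # and use a second RPC socket so the two targets do not collide.
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!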
00:06:57.014 [2024-06-10 11:13:54.100395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.014 [2024-06-10 11:13:54.226551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.765 11:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:57.765 11:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:57.765 11:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1350950 00:06:57.765 11:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1350950 00:06:57.765 11:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.335 lslocks: write error 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1350950 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1350950 ']' 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1350950 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1350950 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1350950' 00:06:58.335 killing process with pid 1350950 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1350950 00:06:58.335 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1350950 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1351081 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1351081 ']' 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1351081 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1351081 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1351081' 00:06:58.907 
killing process with pid 1351081 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1351081 00:06:58.907 11:13:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1351081 00:06:58.907 00:06:58.907 real 0m3.041s 00:06:58.907 user 0m3.451s 00:06:58.907 sys 0m0.881s 00:06:58.907 11:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.907 11:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.907 ************************************ 00:06:58.907 END TEST non_locking_app_on_locked_coremask 00:06:58.907 ************************************ 00:06:59.192 11:13:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.192 11:13:56 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:59.192 11:13:56 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.192 11:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.192 ************************************ 00:06:59.192 START TEST locking_app_on_unlocked_coremask 00:06:59.192 ************************************ 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1351439 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1351439 /var/tmp/spdk.sock 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1351439 ']' 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:59.192 11:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.192 [2024-06-10 11:13:56.228983] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:06:59.192 [2024-06-10 11:13:56.229032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351439 ] 00:06:59.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.193 [2024-06-10 11:13:56.310885] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.193 [2024-06-10 11:13:56.310913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.193 [2024-06-10 11:13:56.376642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1351727 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1351727 /var/tmp/spdk2.sock 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1351727 ']' 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:00.140 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 [2024-06-10 11:13:57.132372] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:00.140 [2024-06-10 11:13:57.132426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351727 ] 00:07:00.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.140 [2024-06-10 11:13:57.221640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.140 [2024-06-10 11:13:57.347919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.080 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:01.080 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:01.080 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1351727 00:07:01.080 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1351727 00:07:01.080 11:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.340 lslocks: write error 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1351439 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1351439 ']' 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1351439 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1351439 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1351439' 00:07:01.340 killing process with pid 1351439 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1351439 00:07:01.340 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1351439 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1351727 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1351727 ']' 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1351727 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1351727 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
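In locking_app_on_unlocked_coremask the roles are reversed: the first target runs with --disable-cpumask-locks, so the second, lock-enabled instance is the one expected to own the core-0 lock file, and the trace verifies that with lslocks against the second pid (the "lslocks: write error" above is just lslocks hitting a closed pipe once grep -q has matched). A hedged sketch of that ownership check ($spdk_tgt_pid2 is assumed to be the pid of the lock-enabled instance):

  # locks_exist-style check: the given pid should hold a lock on a
  # /var/tmp/spdk_cpu_lock_* file for every core in its mask.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid2" && echo "second instance owns the core locks"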
00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1351727' 00:07:01.911 killing process with pid 1351727 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1351727 00:07:01.911 11:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1351727 00:07:02.171 00:07:02.171 real 0m3.016s 00:07:02.171 user 0m3.406s 00:07:02.171 sys 0m0.864s 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.171 ************************************ 00:07:02.171 END TEST locking_app_on_unlocked_coremask 00:07:02.171 ************************************ 00:07:02.171 11:13:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.171 11:13:59 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.171 11:13:59 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.171 11:13:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.171 ************************************ 00:07:02.171 START TEST locking_app_on_locked_coremask 00:07:02.171 ************************************ 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1352073 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1352073 /var/tmp/spdk.sock 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1352073 ']' 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:02.171 11:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.171 [2024-06-10 11:13:59.315988] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:02.171 [2024-06-10 11:13:59.316040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352073 ] 00:07:02.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.432 [2024-06-10 11:13:59.398564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.432 [2024-06-10 11:13:59.466737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.001 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1352203 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1352203 /var/tmp/spdk2.sock 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1352203 /var/tmp/spdk2.sock 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1352203 /var/tmp/spdk2.sock 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1352203 ']' 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:03.002 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.002 [2024-06-10 11:14:00.185850] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:03.002 [2024-06-10 11:14:00.185901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352203 ] 00:07:03.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.262 [2024-06-10 11:14:00.283995] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1352073 has claimed it. 00:07:03.262 [2024-06-10 11:14:00.284034] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1352203) - No such process 00:07:03.832 ERROR: process (pid: 1352203) is no longer running 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1352073 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1352073 00:07:03.832 11:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.092 lslocks: write error 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1352073 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1352073 ']' 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1352073 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1352073 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1352073' 00:07:04.092 killing process with pid 1352073 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1352073 00:07:04.092 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1352073 00:07:04.352 00:07:04.352 real 0m2.110s 00:07:04.352 user 0m2.393s 00:07:04.352 sys 0m0.567s 00:07:04.352 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:04.352 11:14:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 END TEST locking_app_on_locked_coremask 00:07:04.352 ************************************ 00:07:04.352 11:14:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.352 11:14:01 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:04.352 11:14:01 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.352 11:14:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 START TEST locking_overlapped_coremask 00:07:04.352 ************************************ 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1352414 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1352414 /var/tmp/spdk.sock 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1352414 ']' 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:04.352 11:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 [2024-06-10 11:14:01.499934] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:04.352 [2024-06-10 11:14:01.499982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352414 ] 00:07:04.352 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.612 [2024-06-10 11:14:01.582955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.612 [2024-06-10 11:14:01.646778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.612 [2024-06-10 11:14:01.646913] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.612 [2024-06-10 11:14:01.647060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1352718 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1352718 /var/tmp/spdk2.sock 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1352718 /var/tmp/spdk2.sock 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1352718 /var/tmp/spdk2.sock 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1352718 ']' 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:05.182 11:14:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.182 [2024-06-10 11:14:02.377077] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:05.182 [2024-06-10 11:14:02.377131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352718 ] 00:07:05.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.442 [2024-06-10 11:14:02.458572] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1352414 has claimed it. 00:07:05.442 [2024-06-10 11:14:02.458604] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1352718) - No such process 00:07:06.012 ERROR: process (pid: 1352718) is no longer running 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1352414 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1352414 ']' 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1352414 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1352414 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:06.012 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:06.013 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1352414' 00:07:06.013 killing process with pid 1352414 00:07:06.013 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
1352414 00:07:06.013 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1352414 00:07:06.273 00:07:06.273 real 0m1.836s 00:07:06.273 user 0m5.277s 00:07:06.273 sys 0m0.385s 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 ************************************ 00:07:06.273 END TEST locking_overlapped_coremask 00:07:06.273 ************************************ 00:07:06.273 11:14:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:06.273 11:14:03 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:06.273 11:14:03 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:06.273 11:14:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 ************************************ 00:07:06.273 START TEST locking_overlapped_coremask_via_rpc 00:07:06.273 ************************************ 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1352777 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1352777 /var/tmp/spdk.sock 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1352777 ']' 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:06.273 11:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 [2024-06-10 11:14:03.408530] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:06.273 [2024-06-10 11:14:03.408580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352777 ] 00:07:06.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.273 [2024-06-10 11:14:03.490596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
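The locking_overlapped_coremask run above finishes with check_remaining_locks: after the -m 0x1c instance is refused, the lock files held by the surviving -m 0x7 target must be exactly /var/tmp/spdk_cpu_lock_000 through _002. A sketch of that glob-and-compare, following the expansion shown in the trace:

  # Glob whatever per-core lock files exist and compare against the expected set
  # (assumes at least one lock file is present, so the glob actually expands).
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})

  if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
      echo "only cores 0-2 are locked, as expected"
  else
      echo "unexpected lock files: ${locks[*]}" >&2
  fi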
00:07:06.273 [2024-06-10 11:14:03.490624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.534 [2024-06-10 11:14:03.558505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.534 [2024-06-10 11:14:03.558616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.534 [2024-06-10 11:14:03.558619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1353060 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1353060 /var/tmp/spdk2.sock 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1353060 ']' 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.104 11:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.104 [2024-06-10 11:14:04.267897] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:07.105 [2024-06-10 11:14:04.267950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353060 ] 00:07:07.105 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.365 [2024-06-10 11:14:04.346376] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
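The via_rpc variant being traced here brings up two targets whose core masks overlap on core 2, but passes --disable-cpumask-locks so both can boot before any core is claimed. Stripped of the harness plumbing, the setup amounts to roughly the following sketch (binary path and sockets as in the trace; backgrounding with & is a simplification of the test's own process handling):

# Two spdk_tgt instances with overlapping core masks (0x7 = cores 0-2, 0x1c = cores 2-4),
# neither taking CPU core locks at startup.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x7 --disable-cpumask-locks &
$SPDK_TGT -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# The test then waits for each RPC socket (/var/tmp/spdk.sock and /var/tmp/spdk2.sock)
# before enabling the locks over RPC.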
00:07:07.365 [2024-06-10 11:14:04.346400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.365 [2024-06-10 11:14:04.452348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.365 [2024-06-10 11:14:04.455939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.365 [2024-06-10 11:14:04.455941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.938 [2024-06-10 11:14:05.126887] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1352777 has claimed it. 
00:07:07.938 request: 00:07:07.938 { 00:07:07.938 "method": "framework_enable_cpumask_locks", 00:07:07.938 "req_id": 1 00:07:07.938 } 00:07:07.938 Got JSON-RPC error response 00:07:07.938 response: 00:07:07.938 { 00:07:07.938 "code": -32603, 00:07:07.938 "message": "Failed to claim CPU core: 2" 00:07:07.938 } 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1352777 /var/tmp/spdk.sock 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1352777 ']' 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.938 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.939 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1353060 /var/tmp/spdk2.sock 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1353060 ']' 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
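rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, so the failing step above corresponds to roughly the following sketch; because the first target already holds the lock on core 2, the second call comes back with the -32603 "Failed to claim CPU core: 2" error shown in the JSON response above:

# Enable core locks on both running targets over JSON-RPC (a sketch; run from the spdk checkout).
./scripts/rpc.py framework_enable_cpumask_locks                          # first target, default socket: succeeds
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target, overlaps core 2: fails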
00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:08.200 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.460 00:07:08.460 real 0m2.163s 00:07:08.460 user 0m0.920s 00:07:08.460 sys 0m0.165s 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.460 11:14:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.460 ************************************ 00:07:08.460 END TEST locking_overlapped_coremask_via_rpc 00:07:08.460 ************************************ 00:07:08.460 11:14:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.460 11:14:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1352777 ]] 00:07:08.460 11:14:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1352777 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1352777 ']' 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1352777 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1352777 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:08.460 11:14:05 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1352777' 00:07:08.461 killing process with pid 1352777 00:07:08.461 11:14:05 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1352777 00:07:08.461 11:14:05 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1352777 00:07:08.722 11:14:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1353060 ]] 00:07:08.722 11:14:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1353060 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1353060 ']' 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1353060 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1353060 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1353060' 00:07:08.722 killing process with pid 1353060 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1353060 00:07:08.722 11:14:05 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1353060 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1352777 ]] 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1352777 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1352777 ']' 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1352777 00:07:08.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1352777) - No such process 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1352777 is not found' 00:07:08.983 Process with pid 1352777 is not found 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1353060 ]] 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1353060 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1353060 ']' 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1353060 00:07:08.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1353060) - No such process 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1353060 is not found' 00:07:08.983 Process with pid 1353060 is not found 00:07:08.983 11:14:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.983 00:07:08.983 real 0m16.623s 00:07:08.983 user 0m29.238s 00:07:08.983 sys 0m4.866s 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.983 11:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.983 ************************************ 00:07:08.983 END TEST cpu_locks 00:07:08.983 ************************************ 00:07:08.983 00:07:08.983 real 0m41.950s 00:07:08.983 user 1m21.671s 00:07:08.983 sys 0m8.208s 00:07:08.983 11:14:06 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.983 11:14:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.983 ************************************ 00:07:08.983 END TEST event 00:07:08.983 ************************************ 00:07:08.983 11:14:06 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:08.983 11:14:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:08.983 11:14:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:08.983 11:14:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.983 ************************************ 00:07:08.983 START TEST thread 00:07:08.983 ************************************ 00:07:08.983 11:14:06 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:09.244 * Looking for test storage... 00:07:09.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:09.244 11:14:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.244 11:14:06 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:09.244 11:14:06 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:09.244 11:14:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.244 ************************************ 00:07:09.244 START TEST thread_poller_perf 00:07:09.244 ************************************ 00:07:09.244 11:14:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.244 [2024-06-10 11:14:06.335361] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:09.244 [2024-06-10 11:14:06.335468] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353469 ] 00:07:09.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.244 [2024-06-10 11:14:06.425883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.505 [2024-06-10 11:14:06.493832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.505 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:10.444 ====================================== 00:07:10.444 busy:2609356952 (cyc) 00:07:10.444 total_run_count: 310000 00:07:10.444 tsc_hz: 2600000000 (cyc) 00:07:10.444 ====================================== 00:07:10.444 poller_cost: 8417 (cyc), 3237 (nsec) 00:07:10.444 00:07:10.444 real 0m1.241s 00:07:10.444 user 0m1.144s 00:07:10.444 sys 0m0.093s 00:07:10.444 11:14:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.444 11:14:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.444 ************************************ 00:07:10.444 END TEST thread_poller_perf 00:07:10.444 ************************************ 00:07:10.444 11:14:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.444 11:14:07 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:10.444 11:14:07 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:10.444 11:14:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.444 ************************************ 00:07:10.444 START TEST thread_poller_perf 00:07:10.444 ************************************ 00:07:10.444 11:14:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.444 [2024-06-10 11:14:07.652257] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
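poller_cost reported by poller_perf above is simply the busy cycle count divided by total_run_count, converted to nanoseconds via the reported tsc_hz. Redoing the arithmetic for the first run (1 us poller period), with awk used here only as a calculator:

# 2609356952 busy cycles over 310000 runs at tsc_hz 2600000000 (2.6 GHz).
echo "2609356952 310000 2600000000" |
    awk '{ cyc = $1 / $2; printf "poller_cost: %.0f (cyc), %.0f (nsec)\n", cyc, cyc / ($3 / 1e9) }'
# -> poller_cost: 8417 (cyc), 3237 (nsec), matching the summary printed above.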
00:07:10.444 [2024-06-10 11:14:07.652354] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353786 ] 00:07:10.705 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.705 [2024-06-10 11:14:07.737497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.705 [2024-06-10 11:14:07.803186] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.705 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:11.645 ====================================== 00:07:11.645 busy:2602176228 (cyc) 00:07:11.645 total_run_count: 4135000 00:07:11.645 tsc_hz: 2600000000 (cyc) 00:07:11.645 ====================================== 00:07:11.645 poller_cost: 629 (cyc), 241 (nsec) 00:07:11.645 00:07:11.645 real 0m1.225s 00:07:11.645 user 0m1.132s 00:07:11.645 sys 0m0.090s 00:07:11.645 11:14:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.645 11:14:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.645 ************************************ 00:07:11.645 END TEST thread_poller_perf 00:07:11.645 ************************************ 00:07:11.920 11:14:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.920 00:07:11.921 real 0m2.716s 00:07:11.921 user 0m2.380s 00:07:11.921 sys 0m0.345s 00:07:11.921 11:14:08 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.921 11:14:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.921 ************************************ 00:07:11.921 END TEST thread 00:07:11.921 ************************************ 00:07:11.921 11:14:08 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:11.921 11:14:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.921 11:14:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.921 11:14:08 -- common/autotest_common.sh@10 -- # set +x 00:07:11.921 ************************************ 00:07:11.921 START TEST accel 00:07:11.921 ************************************ 00:07:11.921 11:14:08 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:11.921 * Looking for test storage... 00:07:11.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:11.921 11:14:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:11.921 11:14:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:11.921 11:14:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.921 11:14:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1353965 00:07:11.921 11:14:09 accel -- accel/accel.sh@63 -- # waitforlisten 1353965 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@830 -- # '[' -z 1353965 ']' 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.921 11:14:09 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:11.921 11:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.921 11:14:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:11.921 11:14:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.921 11:14:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.921 11:14:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.921 11:14:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.921 11:14:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.921 11:14:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.921 11:14:09 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.921 [2024-06-10 11:14:09.141930] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:11.921 [2024-06-10 11:14:09.142002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353965 ] 00:07:12.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.186 [2024-06-10 11:14:09.228519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.186 [2024-06-10 11:14:09.297443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.757 11:14:09 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:12.757 11:14:09 accel -- common/autotest_common.sh@863 -- # return 0 00:07:12.757 11:14:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:12.757 11:14:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:12.757 11:14:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:12.757 11:14:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:12.757 11:14:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:12.757 11:14:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:12.757 11:14:09 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:12.757 11:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.757 11:14:09 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:12.757 11:14:09 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.017 11:14:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.017 11:14:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.017 11:14:10 accel -- accel/accel.sh@75 -- # killprocess 1353965 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@949 -- # '[' -z 1353965 ']' 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@953 -- # kill -0 1353965 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@954 -- # uname 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1353965 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1353965' 00:07:13.017 killing process with pid 1353965 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@968 -- # kill 1353965 00:07:13.017 11:14:10 accel -- common/autotest_common.sh@973 -- # wait 1353965 00:07:13.278 11:14:10 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:13.278 11:14:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 11:14:10 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:13.278 11:14:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:13.278 11:14:10 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.278 11:14:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 11:14:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.278 11:14:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 ************************************ 00:07:13.278 START TEST accel_missing_filename 00:07:13.278 ************************************ 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.278 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:13.278 11:14:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:13.278 [2024-06-10 11:14:10.457912] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:13.278 [2024-06-10 11:14:10.458012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354205 ] 00:07:13.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.538 [2024-06-10 11:14:10.542337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.538 [2024-06-10 11:14:10.609393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.538 [2024-06-10 11:14:10.640536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.538 [2024-06-10 11:14:10.676911] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:13.538 A filename is required. 
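accel_missing_filename above confirms that the compress workload will not start without an input file ("A filename is required."), and accel_compress_verify below supplies one with -l but then trips over -y, since compress does not support result verification. A hedged pair of invocations showing the difference (binary and input-file paths as used by the tests):

ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
$ACCEL_PERF -t 1 -w compress             # rejected: a filename is required
$ACCEL_PERF -t 1 -w compress -l "$BIB"   # should compress the input file for 1 second (no -y: verify is unsupported for compress)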
00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:13.538 00:07:13.538 real 0m0.302s 00:07:13.538 user 0m0.218s 00:07:13.538 sys 0m0.123s 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.538 11:14:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:13.538 ************************************ 00:07:13.538 END TEST accel_missing_filename 00:07:13.538 ************************************ 00:07:13.538 11:14:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.538 11:14:10 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:13.538 11:14:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.538 11:14:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.799 ************************************ 00:07:13.799 START TEST accel_compress_verify 00:07:13.799 ************************************ 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.799 11:14:10 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.799 
11:14:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.799 11:14:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.799 [2024-06-10 11:14:10.819158] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:13.799 [2024-06-10 11:14:10.819219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354274 ] 00:07:13.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.799 [2024-06-10 11:14:10.904361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.799 [2024-06-10 11:14:10.970037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.799 [2024-06-10 11:14:11.000659] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.060 [2024-06-10 11:14:11.036816] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:14.060 00:07:14.060 Compression does not support the verify option, aborting. 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:14.060 00:07:14.060 real 0m0.296s 00:07:14.060 user 0m0.215s 00:07:14.060 sys 0m0.119s 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.060 11:14:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 ************************************ 00:07:14.060 END TEST accel_compress_verify 00:07:14.060 ************************************ 00:07:14.060 11:14:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 ************************************ 00:07:14.060 START TEST accel_wrong_workload 00:07:14.060 ************************************ 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:14.060 11:14:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:14.060 Unsupported workload type: foobar 00:07:14.060 [2024-06-10 11:14:11.178009] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:14.060 accel_perf options: 00:07:14.060 [-h help message] 00:07:14.060 [-q queue depth per core] 00:07:14.060 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:14.060 [-T number of threads per core 00:07:14.060 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:14.060 [-t time in seconds] 00:07:14.060 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:14.060 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:14.060 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:14.060 [-l for compress/decompress workloads, name of uncompressed input file 00:07:14.060 [-S for crc32c workload, use this seed value (default 0) 00:07:14.060 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:14.060 [-f for fill workload, use this BYTE value (default 255) 00:07:14.060 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:14.060 [-y verify result if this switch is on] 00:07:14.060 [-a tasks to allocate per core (default: same value as -q)] 00:07:14.060 Can be used to spread operations across a wider range of memory. 
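The option listing above is what accel_perf prints when it rejects a parameter, here the unsupported workload 'foobar'. For contrast, the accel_crc32c test that follows uses a well-formed invocation; its flags map directly onto the listing above (-t duration in seconds, -w workload, -S crc32c seed, -y verify results):

# A valid run matching the accel_crc32c test below: crc32c for 1 second, seed 32, verifying each result.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y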
00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:14.060 00:07:14.060 real 0m0.034s 00:07:14.060 user 0m0.020s 00:07:14.060 sys 0m0.014s 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.060 11:14:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 ************************************ 00:07:14.060 END TEST accel_wrong_workload 00:07:14.060 ************************************ 00:07:14.060 Error: writing output failed: Broken pipe 00:07:14.060 11:14:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.060 11:14:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 ************************************ 00:07:14.060 START TEST accel_negative_buffers 00:07:14.060 ************************************ 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.060 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:14.060 11:14:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:14.060 -x option must be non-negative. 
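accel_negative_buffers above drives the same parser error path with -x -1; per the option listing, -x sets the number of xor source buffers and must be non-negative, with a minimum of 2. A presumably valid counterpart (the buffer count 3 is an arbitrary illustrative choice, not taken from the log):

# xor across 3 source buffers with result verification; -x must be non-negative and at least 2.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3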
00:07:14.061 [2024-06-10 11:14:11.275851] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:14.061 accel_perf options: 00:07:14.061 [-h help message] 00:07:14.061 [-q queue depth per core] 00:07:14.061 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:14.061 [-T number of threads per core 00:07:14.061 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:14.061 [-t time in seconds] 00:07:14.061 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:14.061 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:14.061 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:14.061 [-l for compress/decompress workloads, name of uncompressed input file 00:07:14.061 [-S for crc32c workload, use this seed value (default 0) 00:07:14.061 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:14.061 [-f for fill workload, use this BYTE value (default 255) 00:07:14.061 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:14.061 [-y verify result if this switch is on] 00:07:14.061 [-a tasks to allocate per core (default: same value as -q)] 00:07:14.061 Can be used to spread operations across a wider range of memory. 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:14.061 00:07:14.061 real 0m0.034s 00:07:14.061 user 0m0.021s 00:07:14.061 sys 0m0.012s 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.061 11:14:11 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:14.061 ************************************ 00:07:14.061 END TEST accel_negative_buffers 00:07:14.061 ************************************ 00:07:14.321 Error: writing output failed: Broken pipe 00:07:14.321 11:14:11 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:14.321 11:14:11 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:14.321 11:14:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.321 11:14:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.321 ************************************ 00:07:14.321 START TEST accel_crc32c 00:07:14.321 ************************************ 00:07:14.321 11:14:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:14.321 11:14:11 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:14.321 [2024-06-10 11:14:11.385544] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:14.321 [2024-06-10 11:14:11.385635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354575 ] 00:07:14.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.321 [2024-06-10 11:14:11.470635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.321 [2024-06-10 11:14:11.534207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.581 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.582 11:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:15.521 11:14:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.521 00:07:15.521 real 0m1.299s 00:07:15.521 user 0m1.178s 00:07:15.521 sys 0m0.124s 00:07:15.521 11:14:12 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:15.521 11:14:12 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:15.521 ************************************ 00:07:15.521 END TEST accel_crc32c 00:07:15.521 ************************************ 00:07:15.521 11:14:12 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:15.521 11:14:12 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:15.521 11:14:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:15.521 11:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.521 ************************************ 00:07:15.521 START TEST accel_crc32c_C2 00:07:15.521 ************************************ 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:15.521 11:14:12 
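The crc32c run above ends up on the "software" accel module (accel_module=software), i.e. the checksum is computed on the CPU rather than offloaded. The sketch below is a minimal, illustrative C routine for CRC-32C (Castagnoli); the name crc32c_sw() and the bit-by-bit loop are assumptions made for clarity, not SPDK's actual table- or instruction-accelerated code, and the 4096-byte size comes from the test parameters in the trace.

#include <stdint.h>
#include <stddef.h>

/* CRC-32C (Castagnoli), reflected polynomial 0x82F63B38, bit-by-bit.
 * Conventional seed handling: invert on entry and on return, so a
 * whole-buffer checksum is crc32c_sw(0, buf, len). */
static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	crc = ~crc;
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B38 & (0 - (crc & 1)));
	}
	return ~crc;
}

For the 4 KiB buffer used by the test, the call would be crc32c_sw(0, buf, 4096).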
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:15.521 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:15.781 [2024-06-10 11:14:12.745954] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:15.781 [2024-06-10 11:14:12.746020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354728 ] 00:07:15.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.781 [2024-06-10 11:14:12.830211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.782 [2024-06-10 11:14:12.897899] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.782 11:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 
11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.165 00:07:17.165 real 0m1.300s 00:07:17.165 user 0m1.180s 00:07:17.165 sys 0m0.122s 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.165 11:14:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:17.165 ************************************ 00:07:17.165 END TEST accel_crc32c_C2 00:07:17.165 ************************************ 00:07:17.165 11:14:14 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:17.165 11:14:14 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:17.165 11:14:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.165 11:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.165 ************************************ 00:07:17.165 START TEST accel_copy 00:07:17.165 ************************************ 00:07:17.165 11:14:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:17.165 11:14:14 
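The -C 2 variant appears to split the source data across two I/O vectors instead of one; the checksum itself is unchanged because CRC-32C can be carried across segments by feeding the running value back in as the seed for the next segment. A sketch of that chaining, assuming the illustrative crc32c_sw() helper from the previous note:

#include <sys/uio.h>	/* struct iovec */
#include <stdint.h>

/* Chain CRC-32C across an iovec: the CRC of the concatenated buffers
 * equals the CRC computed segment by segment, reusing the running
 * value as the seed for each subsequent segment. */
static uint32_t crc32c_iov(const struct iovec *iov, int iovcnt)
{
	uint32_t crc = 0;

	for (int i = 0; i < iovcnt; i++)
		crc = crc32c_sw(crc, iov[i].iov_base, iov[i].iov_len);
	return crc;
}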
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.165 [2024-06-10 11:14:14.109758] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:17.165 [2024-06-10 11:14:14.109818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354934 ] 00:07:17.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.165 [2024-06-10 11:14:14.194736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.165 [2024-06-10 11:14:14.261807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:17.165 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.166 11:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:18.548 11:14:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.548 00:07:18.548 real 0m1.302s 00:07:18.548 user 0m1.182s 00:07:18.548 sys 0m0.120s 00:07:18.548 11:14:15 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.548 11:14:15 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.548 ************************************ 00:07:18.548 END TEST accel_copy 00:07:18.548 ************************************ 00:07:18.548 11:14:15 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.548 11:14:15 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:18.548 11:14:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.548 11:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.548 ************************************ 00:07:18.548 START TEST accel_fill 00:07:18.548 ************************************ 00:07:18.548 11:14:15 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.548 11:14:15 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:18.548 11:14:15 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:18.548 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.548 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.548 11:14:15 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.549 11:14:15 accel.accel_fill -- 
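In the software module the copy workload reduces to a memory copy of the 4096-byte buffer; the -y flag appears to request verification of the result. A trivial sketch of copy-then-verify, with illustrative naming and the size taken from the run above:

#include <string.h>
#include <stddef.h>
#include <assert.h>

/* Software copy with a verification pass, roughly what a copy test
 * with result verification enabled reduces to. */
static void copy_and_verify(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	assert(memcmp(dst, src, len) == 0);	/* destination matches source */
}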
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:18.549 [2024-06-10 11:14:15.468877] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:18.549 [2024-06-10 11:14:15.468931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355250 ] 00:07:18.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.549 [2024-06-10 11:14:15.551695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.549 [2024-06-10 11:14:15.613229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.549 11:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:19.933 11:14:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.933 00:07:19.933 real 0m1.288s 00:07:19.933 user 0m1.173s 00:07:19.933 sys 0m0.116s 00:07:19.933 11:14:16 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.933 11:14:16 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:19.933 ************************************ 00:07:19.933 END TEST accel_fill 00:07:19.933 ************************************ 00:07:19.933 11:14:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:19.933 11:14:16 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:19.933 11:14:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.933 11:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.933 ************************************ 00:07:19.933 START TEST accel_copy_crc32c 00:07:19.933 ************************************ 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:19.933 11:14:16 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
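The fill test passes -f 128, and the trace records val=0x80 along with a 4096-byte buffer: the software path simply replicates that byte value across the destination. A short sketch with illustrative naming:

#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Software fill: write a single byte value across the whole buffer.
 * 0x80 corresponds to the -f 128 argument above; 4096 is the test's
 * buffer size, e.g. fill_buffer(buf, 0x80, 4096). */
static void fill_buffer(void *dst, uint8_t fill, size_t len)
{
	memset(dst, fill, len);
}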
00:07:19.933 [2024-06-10 11:14:16.826401] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:19.933 [2024-06-10 11:14:16.826463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355541 ] 00:07:19.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.933 [2024-06-10 11:14:16.911672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.933 [2024-06-10 11:14:16.978260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.933 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.934 11:14:17 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.934 11:14:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.874 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.135 00:07:21.135 real 0m1.300s 00:07:21.135 user 0m1.174s 00:07:21.135 sys 0m0.128s 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:21.135 11:14:18 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 ************************************ 00:07:21.135 END TEST accel_copy_crc32c 00:07:21.135 ************************************ 00:07:21.135 11:14:18 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.135 11:14:18 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:21.135 11:14:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.135 11:14:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 ************************************ 00:07:21.135 START TEST accel_copy_crc32c_C2 00:07:21.135 ************************************ 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
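copy_crc32c combines the two earlier operations: the source is copied to the destination and a CRC-32C is computed over the same data. A software sketch follows, again assuming the illustrative crc32c_sw() helper; a real implementation would fuse the copy and the checksum into a single pass over the data.

#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Combined copy + CRC-32C: copy src into dst and return the checksum
 * of the copied data (illustrative name, not SPDK's API). */
static uint32_t copy_crc32c_sw(void *dst, const void *src, size_t len,
			       uint32_t seed)
{
	memcpy(dst, src, len);
	return crc32c_sw(seed, src, len);
}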
copy_crc32c -y -C 2 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.135 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:21.135 [2024-06-10 11:14:18.189502] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:21.135 [2024-06-10 11:14:18.189565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355629 ] 00:07:21.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.135 [2024-06-10 11:14:18.274514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.135 [2024-06-10 11:14:18.344551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:21.397 11:14:18 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.397 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.398 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 11:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.391 00:07:22.391 real 0m1.304s 00:07:22.391 user 0m1.186s 00:07:22.391 sys 0m0.120s 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.391 11:14:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:07:22.391 ************************************ 00:07:22.391 END TEST accel_copy_crc32c_C2 00:07:22.391 ************************************ 00:07:22.391 11:14:19 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:22.391 11:14:19 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:22.391 11:14:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.391 11:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.391 ************************************ 00:07:22.391 START TEST accel_dualcast 00:07:22.391 ************************************ 00:07:22.391 11:14:19 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:22.391 11:14:19 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:22.391 [2024-06-10 11:14:19.558600] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:07:22.391 [2024-06-10 11:14:19.558691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355928 ] 00:07:22.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.651 [2024-06-10 11:14:19.641554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.651 [2024-06-10 11:14:19.705479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:22.651 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 
11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.652 11:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.034 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.034 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.034 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.034 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:24.035 11:14:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.035 00:07:24.035 real 0m1.296s 00:07:24.035 user 0m1.180s 00:07:24.035 sys 0m0.117s 00:07:24.035 11:14:20 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:24.035 11:14:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:24.035 ************************************ 00:07:24.035 END TEST accel_dualcast 00:07:24.035 ************************************ 00:07:24.035 11:14:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:24.035 11:14:20 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:24.035 11:14:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:24.035 11:14:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.035 ************************************ 00:07:24.035 START TEST accel_compare 00:07:24.035 ************************************ 00:07:24.035 11:14:20 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:24.035 11:14:20 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:24.035 [2024-06-10 11:14:20.919304] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
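Most of each trace is accel.sh stepping through one parsing idiom: set IFS=:, read -r var val, then a case "$var" statement that records fields such as accel_module=software and accel_opc=compare. A stripped-down sketch of that loop is shown below; the input lines and the *module*/*opcode* patterns are illustrative stand-ins, not the exact accel_perf output format.

    # Sketch of the var/val parsing loop visible in the xtrace output.
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=$val ;;   # e.g. accel_module=software
            *opcode*) accel_opc=$val ;;      # e.g. accel_opc=compare
        esac
    done < <(printf 'module:software\nopcode:compare\n')
    echo "module=$accel_module opcode=$accel_opc"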
00:07:24.035 [2024-06-10 11:14:20.919393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356245 ] 00:07:24.035 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.035 [2024-06-10 11:14:21.002671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.035 [2024-06-10 11:14:21.066839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.035 11:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:24.976 11:14:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.976 00:07:24.976 real 0m1.296s 00:07:24.976 user 0m1.181s 00:07:24.976 sys 0m0.116s 00:07:24.976 11:14:22 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:24.976 11:14:22 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:24.976 ************************************ 00:07:24.976 END TEST accel_compare 00:07:24.976 ************************************ 00:07:25.235 11:14:22 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.235 11:14:22 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:25.235 11:14:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.235 11:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.235 ************************************ 00:07:25.235 START TEST accel_xor 00:07:25.235 ************************************ 00:07:25.235 11:14:22 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:25.235 11:14:22 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:25.235 [2024-06-10 11:14:22.289122] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
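The START TEST / END TEST banners and the real/user/sys lines in each block come from the run_test helper in common/autotest_common.sh. The following is a simplified stand-in for illustration only; the real helper also toggles xtrace and performs argument checks such as the '[' 7 -le 1 ']' test seen above.

    # Hypothetical, simplified version of the run_test wrapper.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # bash `time` produces the real/user/sys lines
        echo "END TEST $name"
    }
    run_test_sketch accel_xor ./build/examples/accel_perf -t 1 -w xor -y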
00:07:25.235 [2024-06-10 11:14:22.289214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356560 ] 00:07:25.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.235 [2024-06-10 11:14:22.373449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.235 [2024-06-10 11:14:22.435483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.494 11:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 
11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.432 00:07:26.432 real 0m1.297s 00:07:26.432 user 0m1.186s 00:07:26.432 sys 0m0.113s 00:07:26.432 11:14:23 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.432 11:14:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 ************************************ 00:07:26.432 END TEST accel_xor 00:07:26.432 ************************************ 00:07:26.432 11:14:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:26.432 11:14:23 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:26.432 11:14:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.432 11:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 ************************************ 00:07:26.432 START TEST accel_xor 00:07:26.432 ************************************ 00:07:26.432 11:14:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:26.432 11:14:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:26.432 [2024-06-10 11:14:23.654565] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
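This second xor case differs from the previous one only by the trailing -x 3: the earlier xor trace records val=2 where this one records val=3, so -x appears to set the number of xor source buffers. Illustrative invocations, with the flag meaning inferred from the traces rather than from accel_perf documentation:

    ./build/examples/accel_perf -t 1 -w xor -y        # two sources, as in the previous test
    ./build/examples/accel_perf -t 1 -w xor -y -x 3   # three sources, as in this test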
00:07:26.432 [2024-06-10 11:14:23.654627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356667 ] 00:07:26.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.693 [2024-06-10 11:14:23.740586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.693 [2024-06-10 11:14:23.814200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.693 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.694 11:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 
11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:28.078 11:14:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.078 00:07:28.078 real 0m1.309s 00:07:28.078 user 0m1.186s 00:07:28.078 sys 0m0.125s 00:07:28.078 11:14:24 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.078 11:14:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 ************************************ 00:07:28.078 END TEST accel_xor 00:07:28.078 ************************************ 00:07:28.078 11:14:24 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:28.078 11:14:24 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:28.078 11:14:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.078 11:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 ************************************ 00:07:28.078 START TEST accel_dif_verify 00:07:28.078 ************************************ 00:07:28.078 11:14:24 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:28.078 11:14:24 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:28.078 [2024-06-10 11:14:25.015034] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
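Every test in this section closes with the same three checks visible in the traces: the reported module and opcode must be non-empty, and the module must match the software engine. A stripped-down equivalent of that assertion, assuming accel_module and accel_opc were populated by the parsing loop:

    if [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]; then
        echo "op $accel_opc was handled by the software engine"
    else
        echo "unexpected accel engine: ${accel_module:-<none>}" >&2
    fi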
00:07:28.078 [2024-06-10 11:14:25.015096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356920 ] 00:07:28.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.078 [2024-06-10 11:14:25.099559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.078 [2024-06-10 11:14:25.165034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.078 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 
11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.079 11:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 
11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:29.462 11:14:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.462 00:07:29.462 real 0m1.299s 00:07:29.462 user 0m1.186s 00:07:29.462 sys 0m0.114s 00:07:29.462 11:14:26 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:29.462 11:14:26 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 ************************************ 00:07:29.462 END TEST accel_dif_verify 00:07:29.462 ************************************ 00:07:29.462 11:14:26 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:29.462 11:14:26 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:29.462 11:14:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.462 11:14:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 ************************************ 00:07:29.462 START TEST accel_dif_generate 00:07:29.462 ************************************ 00:07:29.462 11:14:26 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 
11:14:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:29.462 [2024-06-10 11:14:26.381923] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:29.462 [2024-06-10 11:14:26.381984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357237 ] 00:07:29.462 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.462 [2024-06-10 11:14:26.466720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.462 [2024-06-10 11:14:26.531434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:29.462 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.463 11:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:30.846 11:14:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.846 00:07:30.846 real 0m1.297s 00:07:30.846 user 0m1.181s 00:07:30.846 sys 
0m0.118s 00:07:30.846 11:14:27 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.846 11:14:27 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:30.846 ************************************ 00:07:30.846 END TEST accel_dif_generate 00:07:30.846 ************************************ 00:07:30.846 11:14:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.846 11:14:27 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:30.846 11:14:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.846 11:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.846 ************************************ 00:07:30.846 START TEST accel_dif_generate_copy 00:07:30.846 ************************************ 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:30.846 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:30.847 [2024-06-10 11:14:27.758543] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
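The dif_generate case that just ended and the dif_generate_copy case starting here drive the same accel_perf example binary; only the -w workload name changes. A minimal sketch of reproducing the two runs by hand from a built SPDK tree follows, using only flags that appear in this log and leaving out the -c /dev/fd/62 config plumbing (that argument is the accel JSON config the wrapper assembles via build_accel_config; whether it can simply be dropped for the default software module is an assumption, not something the log confirms):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate        # workload exercised by TEST accel_dif_generate
  ./build/examples/accel_perf -t 1 -w dif_generate_copy   # workload exercised by TEST accel_dif_generate_copy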
00:07:30.847 [2024-06-10 11:14:27.758645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357556 ] 00:07:30.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.847 [2024-06-10 11:14:27.850788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.847 [2024-06-10 11:14:27.916174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.847 11:14:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
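Each of these cases shares the same preamble: build_accel_config collects module options into accel_json_cfg, joins them with jq -r ., and hands the result to accel_perf as -c /dev/fd/62. A rough equivalent using bash process substitution is sketched below; accel.json is a hypothetical config file standing in for whatever the harness generated, and the fd number bash assigns will simply differ from 62:

  # pass a JSON accel config over an anonymous /dev/fd/N path, as the wrapper does
  ./build/examples/accel_perf -c <(jq -r . accel.json) -t 1 -w dif_generate_copy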
00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.233 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.234 00:07:32.234 real 0m1.309s 00:07:32.234 user 0m1.189s 00:07:32.234 sys 0m0.123s 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.234 11:14:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.234 ************************************ 00:07:32.234 END TEST accel_dif_generate_copy 00:07:32.234 ************************************ 00:07:32.234 11:14:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:32.234 11:14:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.234 11:14:29 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:32.234 11:14:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.234 11:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.234 ************************************ 00:07:32.234 START TEST accel_comp 00:07:32.234 ************************************ 00:07:32.234 11:14:29 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:32.234 [2024-06-10 11:14:29.137358] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:32.234 [2024-06-10 11:14:29.137451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357650 ] 00:07:32.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.234 [2024-06-10 11:14:29.221269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.234 [2024-06-10 11:14:29.288490] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 
11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.234 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.235 11:14:29 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 11:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:33.620 11:14:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.620 00:07:33.620 real 0m1.305s 00:07:33.620 user 0m1.191s 00:07:33.620 sys 0m0.116s 00:07:33.620 11:14:30 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:33.620 11:14:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:33.620 ************************************ 00:07:33.620 END TEST accel_comp 00:07:33.620 ************************************ 00:07:33.620 11:14:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.620 11:14:30 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:33.620 11:14:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:33.620 11:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.620 ************************************ 00:07:33.620 START TEST accel_decomp 00:07:33.620 ************************************ 00:07:33.620 11:14:30 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:33.620 11:14:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:33.620 [2024-06-10 11:14:30.486840] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:33.621 [2024-06-10 11:14:30.486901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357912 ] 00:07:33.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.621 [2024-06-10 11:14:30.571759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.621 [2024-06-10 11:14:30.637583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.621 11:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.563 11:14:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.563 00:07:34.563 real 0m1.302s 00:07:34.563 user 0m1.181s 00:07:34.563 sys 0m0.123s 00:07:34.563 11:14:31 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.563 11:14:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 ************************************ 00:07:34.563 END TEST accel_decomp 00:07:34.563 ************************************ 00:07:34.824 
11:14:31 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.824 11:14:31 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:34.824 11:14:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.824 11:14:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.824 ************************************ 00:07:34.825 START TEST accel_decomp_full 00:07:34.825 ************************************ 00:07:34.825 11:14:31 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:34.825 11:14:31 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:34.825 [2024-06-10 11:14:31.858289] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
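The compression-path cases reuse the same binary against the bundled test/accel/bib input: accel_comp compresses it, accel_decomp decompresses it with -y, and the accel_decomp_full run starting here adds -o 0. A sketch of those three invocations, limited to the flags visible in this log and again omitting the -c /dev/fd/62 config plumbing:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compress   -l test/accel/bib           # TEST accel_comp
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y        # TEST accel_decomp
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0   # TEST accel_decomp_full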
00:07:34.825 [2024-06-10 11:14:31.858352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358233 ] 00:07:34.825 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.825 [2024-06-10 11:14:31.954389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.825 [2024-06-10 11:14:32.031093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
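Note the buffer size the trace reports for this run: 111250 bytes, versus the 4096 bytes used by the plain accel_decomp case above. Presumably that is what the _full suffix refers to, with -o 0 letting the run consume the whole bib payload rather than a fixed 4 KiB chunk; the log shows the size change but not the flag's documented meaning, so treat that reading as an inference.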
00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.086 11:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.027 11:14:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.027 00:07:36.027 real 0m1.337s 00:07:36.027 user 0m1.205s 00:07:36.027 sys 0m0.134s 00:07:36.027 11:14:33 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.027 11:14:33 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:36.027 ************************************ 00:07:36.027 END TEST accel_decomp_full 00:07:36.027 ************************************ 00:07:36.027 11:14:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.027 11:14:33 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:36.027 11:14:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.027 11:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.027 ************************************ 00:07:36.027 START TEST accel_decomp_mcore 00:07:36.027 ************************************ 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:36.027 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:36.287 [2024-06-10 11:14:33.265284] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:36.287 [2024-06-10 11:14:33.265347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358550 ] 00:07:36.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.287 [2024-06-10 11:14:33.352939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.287 [2024-06-10 11:14:33.429930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.287 [2024-06-10 11:14:33.430060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.287 [2024-06-10 11:14:33.430215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.287 [2024-06-10 11:14:33.430215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:36.287 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.288 11:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
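For reference, the accel_decomp_mcore case traced above boils down to a single accel_perf invocation; the sketch below is reassembled from the command line shown at the top of this block and is not a command captured verbatim from the job.

    # Hedged sketch: multi-core decompress case from accel.sh, re-run by hand.
    # Flags as shown in the trace: -t 1 (run for 1 s), -w decompress,
    # -l <compressed input>, -y (verify output), -m 0xf (core mask -> the four
    # reactors on cores 0-3 logged above).
    # The -c /dev/fd/62 JSON config is omitted here; build_accel_config was empty.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -m 0xf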
00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.745 00:07:37.745 real 0m1.329s 00:07:37.745 user 0m4.432s 00:07:37.745 sys 0m0.142s 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.745 11:14:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:37.745 ************************************ 00:07:37.745 END TEST accel_decomp_mcore 00:07:37.745 ************************************ 00:07:37.745 11:14:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.745 11:14:34 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:37.745 11:14:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.745 11:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.745 ************************************ 00:07:37.745 START TEST accel_decomp_full_mcore 00:07:37.745 ************************************ 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:37.745 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:37.745 [2024-06-10 11:14:34.670902] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:37.745 [2024-06-10 11:14:34.670996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358614 ] 00:07:37.745 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.745 [2024-06-10 11:14:34.755997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.745 [2024-06-10 11:14:34.836806] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.746 [2024-06-10 11:14:34.836922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.746 [2024-06-10 11:14:34.836968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.746 [2024-06-10 11:14:34.836968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.746 11:14:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.134 00:07:39.134 real 0m1.342s 00:07:39.134 user 0m4.478s 00:07:39.134 sys 0m0.138s 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:39.134 11:14:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 ************************************ 00:07:39.134 END TEST accel_decomp_full_mcore 00:07:39.134 ************************************ 00:07:39.134 11:14:36 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.134 11:14:36 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:39.134 11:14:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.134 11:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 ************************************ 00:07:39.134 START TEST accel_decomp_mthread 00:07:39.134 ************************************ 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
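The accel_decomp_mthread case that starts here swaps the core mask for a thread count: same decompress workload and bib input, but -T 2 worker threads on a single core (the EAL line below shows -c 0x1 and one reactor). A hedged equivalent of what the script runs:

    # Hedged sketch: threaded decompress variant. -T 2 asks accel_perf for two
    # worker threads; with no -m mask it stays on the default single core.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2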
00:07:39.134 [2024-06-10 11:14:36.086837] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:39.134 [2024-06-10 11:14:36.086957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358917 ] 00:07:39.134 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.134 [2024-06-10 11:14:36.181522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.134 [2024-06-10 11:14:36.254888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.134 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.135 11:14:36 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.519 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.520 00:07:40.520 real 0m1.331s 00:07:40.520 user 0m1.205s 00:07:40.520 sys 0m0.137s 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:40.520 11:14:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:40.520 ************************************ 00:07:40.520 END TEST accel_decomp_mthread 00:07:40.520 ************************************ 00:07:40.520 11:14:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.520 11:14:37 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:40.520 11:14:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:40.520 11:14:37 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.520 ************************************ 00:07:40.520 START TEST accel_decomp_full_mthread 00:07:40.520 ************************************ 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:40.520 [2024-06-10 11:14:37.490084] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
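The accel_decomp_full_mthread case starting above adds -o 0 on top of -T 2. Judging from the config dump that follows, that switches the per-operation size from the 4096-byte default seen earlier to the whole 111250-byte bib file; this reading is inferred from the '111250 bytes' value in the trace, not from accel_perf documentation. A hedged equivalent:

    # Hedged sketch: "full" + threaded variant. -o 0 is what accel.sh passes for
    # the full-buffer cases; the trace shows the transfer size becoming
    # '111250 bytes' (the whole bib file) rather than '4096 bytes'.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2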
00:07:40.520 [2024-06-10 11:14:37.490164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359232 ] 00:07:40.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.520 [2024-06-10 11:14:37.574188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.520 [2024-06-10 11:14:37.648277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.520 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.521 11:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.908 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.909 00:07:41.909 real 0m1.344s 00:07:41.909 user 0m1.225s 00:07:41.909 sys 0m0.130s 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:41.909 11:14:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.909 ************************************ 00:07:41.909 END TEST accel_decomp_full_mthread 00:07:41.909 
************************************ 00:07:41.909 11:14:38 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:41.909 11:14:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.909 11:14:38 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:41.909 11:14:38 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:41.909 11:14:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:41.909 11:14:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.909 11:14:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.909 11:14:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.909 11:14:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.909 11:14:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.909 11:14:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.909 11:14:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:41.909 11:14:38 accel -- accel/accel.sh@41 -- # jq -r . 00:07:41.909 ************************************ 00:07:41.909 START TEST accel_dif_functional_tests 00:07:41.909 ************************************ 00:07:41.909 11:14:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.909 [2024-06-10 11:14:38.929537] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:41.909 [2024-06-10 11:14:38.929583] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359518 ] 00:07:41.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.909 [2024-06-10 11:14:39.008567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.909 [2024-06-10 11:14:39.075575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.909 [2024-06-10 11:14:39.075702] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.909 [2024-06-10 11:14:39.075705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.909 00:07:41.909 00:07:41.909 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.909 http://cunit.sourceforge.net/ 00:07:41.909 00:07:41.909 00:07:41.909 Suite: accel_dif 00:07:41.909 Test: verify: DIF generated, GUARD check ...passed 00:07:41.909 Test: verify: DIF generated, APPTAG check ...passed 00:07:41.909 Test: verify: DIF generated, REFTAG check ...passed 00:07:41.909 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:14:39.128739] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.909 passed 00:07:41.909 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:14:39.128783] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.909 passed 00:07:41.909 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:14:39.128803] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.909 passed 00:07:41.909 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:41.909 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:14:39.128853] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:41.909 passed 00:07:41.909 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:07:41.909 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:41.909 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:41.909 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:14:39.128959] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:41.909 passed 00:07:41.909 Test: verify copy: DIF generated, GUARD check ...passed 00:07:41.909 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:41.909 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:41.909 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:14:39.129072] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.909 passed 00:07:41.909 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:14:39.129092] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.909 passed 00:07:41.909 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:14:39.129113] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.909 passed 00:07:41.909 Test: generate copy: DIF generated, GUARD check ...passed 00:07:41.909 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:41.909 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:41.909 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:41.909 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:41.909 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:41.909 Test: generate copy: iovecs-len validate ...[2024-06-10 11:14:39.129291] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:41.909 passed 00:07:41.909 Test: generate copy: buffer alignment validate ...passed 00:07:41.909 00:07:41.909 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.909 suites 1 1 n/a 0 0 00:07:41.909 tests 26 26 26 0 0 00:07:41.909 asserts 115 115 115 0 n/a 00:07:41.909 00:07:41.909 Elapsed time = 0.002 seconds 00:07:42.170 00:07:42.170 real 0m0.359s 00:07:42.171 user 0m0.480s 00:07:42.171 sys 0m0.139s 00:07:42.171 11:14:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.171 11:14:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:42.171 ************************************ 00:07:42.171 END TEST accel_dif_functional_tests 00:07:42.171 ************************************ 00:07:42.171 00:07:42.171 real 0m30.310s 00:07:42.171 user 0m33.491s 00:07:42.171 sys 0m4.436s 00:07:42.171 11:14:39 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.171 11:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.171 ************************************ 00:07:42.171 END TEST accel 00:07:42.171 ************************************ 00:07:42.171 11:14:39 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.171 11:14:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:42.171 11:14:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.171 11:14:39 -- common/autotest_common.sh@10 -- # set +x 00:07:42.171 ************************************ 00:07:42.171 START TEST accel_rpc 00:07:42.171 ************************************ 00:07:42.171 11:14:39 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.432 * Looking for test storage... 00:07:42.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:42.432 11:14:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.432 11:14:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1359626 00:07:42.432 11:14:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1359626 00:07:42.432 11:14:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1359626 ']' 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:42.432 11:14:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.432 [2024-06-10 11:14:39.511421] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
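The accel_rpc run that begins here starts a bare spdk_tgt held at --wait-for-rpc and drives it entirely through rpc.py; the accel_assign_opcode sub-test assigns the copy opcode first to a bogus module and then to software before letting the framework initialize. A condensed, hedged version of that flow (the real script uses waitforlisten on the RPC socket rather than a sleep):

    # Hedged sketch: the sequence accel_rpc.sh exercises in this block.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 2   # crude stand-in for the script's waitforlisten on /var/tmp/spdk.sock
    # assign the copy opcode to the software module, then finish startup
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    # confirm the assignment took effect
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy
    kill "$tgt_pid"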
00:07:42.432 [2024-06-10 11:14:39.511485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359626 ] 00:07:42.432 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.432 [2024-06-10 11:14:39.596787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.692 [2024-06-10 11:14:39.664582] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.262 11:14:40 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:43.262 11:14:40 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:43.262 11:14:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.262 11:14:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.262 11:14:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.262 11:14:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.262 11:14:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.262 11:14:40 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:43.262 11:14:40 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:43.262 11:14:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.262 ************************************ 00:07:43.262 START TEST accel_assign_opcode 00:07:43.262 ************************************ 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.262 [2024-06-10 11:14:40.394665] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.262 [2024-06-10 11:14:40.406682] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.262 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:43.522 11:14:40 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:43.522 software 00:07:43.522 00:07:43.522 real 0m0.213s 00:07:43.522 user 0m0.052s 00:07:43.522 sys 0m0.010s 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:43.522 11:14:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.522 ************************************ 00:07:43.522 END TEST accel_assign_opcode 00:07:43.522 ************************************ 00:07:43.522 11:14:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1359626 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1359626 ']' 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1359626 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1359626 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1359626' 00:07:43.522 killing process with pid 1359626 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@968 -- # kill 1359626 00:07:43.522 11:14:40 accel_rpc -- common/autotest_common.sh@973 -- # wait 1359626 00:07:43.793 00:07:43.793 real 0m1.537s 00:07:43.793 user 0m1.675s 00:07:43.793 sys 0m0.420s 00:07:43.793 11:14:40 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:43.793 11:14:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.793 ************************************ 00:07:43.793 END TEST accel_rpc 00:07:43.793 ************************************ 00:07:43.793 11:14:40 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.793 11:14:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:43.793 11:14:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:43.793 11:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:43.793 ************************************ 00:07:43.793 START TEST app_cmdline 00:07:43.793 ************************************ 00:07:43.793 11:14:40 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.101 * Looking for test storage... 
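Condensed from the accel_rpc trace above, a minimal sketch of the opcode-assignment flow the test exercises (paths abbreviated relative to the spdk checkout; the test itself goes through its rpc_cmd wrapper around scripts/rpc.py):

  # start the target paused so opcodes can be reassigned before framework init
  build/bin/spdk_tgt --wait-for-rpc &
  # route the copy opcode to the software module, then finish initialization
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  # confirm the assignment stuck; prints "software", as in the trace above
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy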
00:07:44.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.101 11:14:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.101 11:14:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1360007 00:07:44.101 11:14:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1360007 00:07:44.101 11:14:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1360007 ']' 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:44.101 11:14:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.101 [2024-06-10 11:14:41.121946] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:07:44.101 [2024-06-10 11:14:41.121996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360007 ] 00:07:44.101 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.101 [2024-06-10 11:14:41.204554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.101 [2024-06-10 11:14:41.267941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.048 11:14:41 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:45.048 11:14:41 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:07:45.048 11:14:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:45.048 { 00:07:45.048 "version": "SPDK v24.09-pre git sha1 3b7525570", 00:07:45.048 "fields": { 00:07:45.048 "major": 24, 00:07:45.048 "minor": 9, 00:07:45.048 "patch": 0, 00:07:45.048 "suffix": "-pre", 00:07:45.048 "commit": "3b7525570" 00:07:45.048 } 00:07:45.048 } 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:45.048 11:14:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.049 11:14:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.049 11:14:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.049 11:14:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.049 11:14:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.049 11:14:42 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.312 request: 00:07:45.312 { 00:07:45.312 "method": "env_dpdk_get_mem_stats", 00:07:45.312 "req_id": 1 00:07:45.312 } 00:07:45.312 Got JSON-RPC error response 00:07:45.312 response: 00:07:45.312 { 00:07:45.312 "code": -32601, 00:07:45.312 "message": "Method not found" 00:07:45.312 } 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:45.312 11:14:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1360007 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1360007 ']' 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1360007 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1360007 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1360007' 00:07:45.312 killing process with pid 1360007 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@968 -- # kill 1360007 00:07:45.312 11:14:42 app_cmdline -- common/autotest_common.sh@973 -- # wait 1360007 00:07:45.573 00:07:45.573 real 0m1.633s 00:07:45.573 user 0m2.026s 00:07:45.573 sys 0m0.412s 00:07:45.573 11:14:42 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.573 11:14:42 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.573 ************************************ 00:07:45.573 END TEST app_cmdline 00:07:45.573 ************************************ 00:07:45.573 11:14:42 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.573 11:14:42 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:45.573 11:14:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.573 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.573 ************************************ 00:07:45.573 START TEST version 00:07:45.573 ************************************ 00:07:45.573 11:14:42 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.573 * Looking for test storage... 00:07:45.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.573 11:14:42 version -- app/version.sh@17 -- # get_header_version major 00:07:45.573 11:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # cut -f2 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.573 11:14:42 version -- app/version.sh@17 -- # major=24 00:07:45.573 11:14:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # cut -f2 00:07:45.573 11:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.573 11:14:42 version -- app/version.sh@18 -- # minor=9 00:07:45.573 11:14:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:45.573 11:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # cut -f2 00:07:45.573 11:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.834 11:14:42 version -- app/version.sh@19 -- # patch=0 00:07:45.834 11:14:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:45.834 11:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.834 11:14:42 version -- app/version.sh@14 -- # cut -f2 00:07:45.834 11:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.834 11:14:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:45.834 11:14:42 version -- app/version.sh@22 -- # version=24.9 00:07:45.834 11:14:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:45.834 11:14:42 version -- app/version.sh@28 -- # version=24.9rc0 00:07:45.834 11:14:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.834 11:14:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.834 11:14:42 version -- app/version.sh@30 -- # py_version=24.9rc0 
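Condensed from the version.sh trace above, a sketch of how the version string is assembled from include/spdk/version.h and cross-checked against the installed Python package (repo-relative path assumed; the header's defines are tab-separated, hence the bare cut -f2, and the -pre suffix maps to rc0 as seen in the trace):

  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version+=".$patch"
  [[ -n $suffix ]] && version+=rc0          # 24.9 plus "-pre" becomes 24.9rc0 above
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]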
00:07:45.834 11:14:42 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:45.834 00:07:45.834 real 0m0.168s 00:07:45.834 user 0m0.080s 00:07:45.834 sys 0m0.125s 00:07:45.834 11:14:42 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.834 11:14:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:45.834 ************************************ 00:07:45.834 END TEST version 00:07:45.834 ************************************ 00:07:45.834 11:14:42 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@198 -- # uname -s 00:07:45.834 11:14:42 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:45.834 11:14:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:45.834 11:14:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:45.834 11:14:42 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.834 11:14:42 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:45.834 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.834 11:14:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:45.834 11:14:42 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:45.834 11:14:42 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:45.835 11:14:42 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.835 11:14:42 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:45.835 11:14:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.835 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.835 ************************************ 00:07:45.835 START TEST nvmf_tcp 00:07:45.835 ************************************ 00:07:45.835 11:14:42 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.096 * Looking for test storage... 00:07:46.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.096 11:14:43 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.096 11:14:43 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.096 11:14:43 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.096 11:14:43 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.096 11:14:43 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.096 11:14:43 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.096 11:14:43 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:46.096 11:14:43 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.096 11:14:43 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.097 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.097 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:46.097 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:46.097 11:14:43 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:46.097 11:14:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.097 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:46.097 11:14:43 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.097 11:14:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:46.097 11:14:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:46.097 11:14:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.097 ************************************ 00:07:46.097 START TEST nvmf_example 00:07:46.097 ************************************ 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.097 * Looking for test storage... 
00:07:46.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.097 11:14:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:54.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:54.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.237 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:54.238 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:54.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.238 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.499 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.499 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:54.499 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:07:54.499 00:07:54.499 --- 10.0.0.2 ping statistics --- 00:07:54.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.499 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:07:54.499 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:54.499 00:07:54.499 --- 10.0.0.1 ping statistics --- 00:07:54.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.499 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1364370 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1364370 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1364370 ']' 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
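The nvmftestinit/nvmf_tcp_init steps in the trace above amount to a two-port loopback topology: one e810 port (cvl_0_0) is moved into a private network namespace to play the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of those commands, using the interface names and addresses seen above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check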
00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:54.500 11:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:54.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.439 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:55.440 11:14:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:55.440 EAL: No free 2048 kB hugepages reported on node 1 
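For reference, the RPC sequence that brought the example target up before the perf run whose results follow, condensed from the trace (paths abbreviated; the target runs inside the namespace created earlier and listens on the default /var/tmp/spdk.sock that waitforlisten polls above):

  ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                             # 64 MiB ram bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive it from the root namespace with the bundled perf tool, as in the trace
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'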
00:08:07.663 Initializing NVMe Controllers 00:08:07.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:07.663 Initialization complete. Launching workers. 00:08:07.663 ======================================================== 00:08:07.663 Latency(us) 00:08:07.663 Device Information : IOPS MiB/s Average min max 00:08:07.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17242.81 67.35 3711.35 820.44 15502.80 00:08:07.663 ======================================================== 00:08:07.663 Total : 17242.81 67.35 3711.35 820.44 15502.80 00:08:07.663 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.663 rmmod nvme_tcp 00:08:07.663 rmmod nvme_fabrics 00:08:07.663 rmmod nvme_keyring 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1364370 ']' 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1364370 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1364370 ']' 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1364370 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1364370 00:08:07.663 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1364370' 00:08:07.664 killing process with pid 1364370 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 1364370 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 1364370 00:08:07.664 nvmf threads initialize successfully 00:08:07.664 bdev subsystem init successfully 00:08:07.664 created a nvmf target service 00:08:07.664 create targets's poll groups done 00:08:07.664 all subsystems of target started 00:08:07.664 nvmf target is running 00:08:07.664 all subsystems of target stopped 00:08:07.664 destroy targets's poll groups done 00:08:07.664 destroyed the nvmf target service 00:08:07.664 bdev subsystem finish successfully 00:08:07.664 nvmf threads destroy successfully 00:08:07.664 11:15:02 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.664 11:15:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.924 00:08:07.924 real 0m21.932s 00:08:07.924 user 0m46.640s 00:08:07.924 sys 0m7.268s 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.924 11:15:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.924 ************************************ 00:08:07.924 END TEST nvmf_example 00:08:07.924 ************************************ 00:08:07.924 11:15:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:07.924 11:15:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:07.924 11:15:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.924 11:15:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.188 ************************************ 00:08:08.188 START TEST nvmf_filesystem 00:08:08.188 ************************************ 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:08.188 * Looking for test storage... 
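The nvmftestfini teardown visible just above reverses that setup; a condensed sketch, assuming remove_spdk_ns deletes the test namespace (its commands are suppressed in the trace via xtrace_disable_per_cmd):

  sync
  modprobe -r nvme-tcp nvme-fabrics            # unload host-side NVMe/TCP modules (nvme_keyring goes with them)
  kill "$nvmfpid"                              # stop the example target, as killprocess does above
  ip netns delete cvl_0_0_ns_spdk              # assumed content of remove_spdk_ns; not shown in the trace
  ip -4 addr flush cvl_0_1                     # drop the initiator address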
00:08:08.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:08.188 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:08.189 11:15:05 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:08.189 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:08.189 #define SPDK_CONFIG_H 00:08:08.189 #define SPDK_CONFIG_APPS 1 00:08:08.189 #define SPDK_CONFIG_ARCH native 00:08:08.189 #undef SPDK_CONFIG_ASAN 00:08:08.189 #undef SPDK_CONFIG_AVAHI 00:08:08.189 #undef SPDK_CONFIG_CET 00:08:08.189 #define SPDK_CONFIG_COVERAGE 1 00:08:08.189 #define SPDK_CONFIG_CROSS_PREFIX 00:08:08.189 #undef SPDK_CONFIG_CRYPTO 00:08:08.189 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:08.189 #undef SPDK_CONFIG_CUSTOMOCF 00:08:08.189 #undef SPDK_CONFIG_DAOS 00:08:08.189 #define SPDK_CONFIG_DAOS_DIR 00:08:08.189 #define SPDK_CONFIG_DEBUG 1 00:08:08.189 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:08.189 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:08.189 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:08.189 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:08.189 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:08.190 #undef SPDK_CONFIG_DPDK_UADK 00:08:08.190 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:08.190 #define SPDK_CONFIG_EXAMPLES 1 00:08:08.190 #undef SPDK_CONFIG_FC 00:08:08.190 #define SPDK_CONFIG_FC_PATH 00:08:08.190 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:08.190 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:08.190 #undef SPDK_CONFIG_FUSE 00:08:08.190 #undef SPDK_CONFIG_FUZZER 00:08:08.190 #define SPDK_CONFIG_FUZZER_LIB 00:08:08.190 #undef SPDK_CONFIG_GOLANG 00:08:08.190 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:08.190 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:08.190 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:08.190 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:08.190 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:08.190 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:08.190 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:08.190 #define SPDK_CONFIG_IDXD 1 00:08:08.190 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:08.190 #undef SPDK_CONFIG_IPSEC_MB 00:08:08.190 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:08.190 #define SPDK_CONFIG_ISAL 1 00:08:08.190 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:08.190 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:08.190 #define SPDK_CONFIG_LIBDIR 00:08:08.190 #undef SPDK_CONFIG_LTO 00:08:08.190 #define SPDK_CONFIG_MAX_LCORES 00:08:08.190 #define SPDK_CONFIG_NVME_CUSE 1 00:08:08.190 #undef SPDK_CONFIG_OCF 00:08:08.190 #define SPDK_CONFIG_OCF_PATH 00:08:08.190 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:08.190 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:08.190 #define SPDK_CONFIG_PGO_DIR 00:08:08.190 #undef SPDK_CONFIG_PGO_USE 00:08:08.190 #define SPDK_CONFIG_PREFIX /usr/local 00:08:08.190 #undef SPDK_CONFIG_RAID5F 00:08:08.190 #undef SPDK_CONFIG_RBD 00:08:08.190 #define SPDK_CONFIG_RDMA 1 00:08:08.190 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:08.190 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:08.190 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:08.190 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:08.190 #define SPDK_CONFIG_SHARED 1 00:08:08.190 #undef SPDK_CONFIG_SMA 00:08:08.190 #define SPDK_CONFIG_TESTS 1 00:08:08.190 #undef SPDK_CONFIG_TSAN 00:08:08.190 #define SPDK_CONFIG_UBLK 1 00:08:08.190 #define SPDK_CONFIG_UBSAN 1 00:08:08.190 #undef SPDK_CONFIG_UNIT_TESTS 00:08:08.190 #undef SPDK_CONFIG_URING 00:08:08.190 #define SPDK_CONFIG_URING_PATH 00:08:08.190 #undef SPDK_CONFIG_URING_ZNS 00:08:08.190 #undef SPDK_CONFIG_USDT 00:08:08.190 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:08.190 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:08.190 #define SPDK_CONFIG_VFIO_USER 1 00:08:08.190 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:08.190 #define SPDK_CONFIG_VHOST 1 00:08:08.190 #define SPDK_CONFIG_VIRTIO 1 00:08:08.190 #undef SPDK_CONFIG_VTUNE 00:08:08.190 #define SPDK_CONFIG_VTUNE_DIR 00:08:08.190 #define SPDK_CONFIG_WERROR 1 00:08:08.190 #define SPDK_CONFIG_WPDK_DIR 00:08:08.190 #undef SPDK_CONFIG_XNVME 00:08:08.190 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:08.190 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:08.191 11:15:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:08.191 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j128 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1366974 ]] 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1366974 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.FxiYi1 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FxiYi1/tests/target /tmp/spdk.FxiYi1 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:08.192 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=957218816 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327211008 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118693085184 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129376284672 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10683199488 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683429888 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64688140288 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25865334784 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25875259392 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9924608 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=339968 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:08.193 11:15:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=163840 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64687366144 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64688144384 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=778240 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937621504 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937625600 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:08.193 * Looking for test storage... 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:08.193 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118693085184 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12897792000 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:08.454 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.455 11:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:16.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:08:16.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.663 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:16.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:16.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.664 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:08:16.925 00:08:16.925 --- 10.0.0.2 ping statistics --- 00:08:16.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.925 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:08:16.925 00:08:16.925 --- 10.0.0.1 ping statistics --- 00:08:16.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.925 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:16.925 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.925 ************************************ 00:08:16.925 START TEST nvmf_filesystem_no_in_capsule 00:08:16.925 ************************************ 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1371553 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1371553 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1371553 ']' 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:16.925 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.925 [2024-06-10 11:15:14.112777] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:08:16.925 [2024-06-10 11:15:14.112849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.185 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.185 [2024-06-10 11:15:14.208787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.185 [2024-06-10 11:15:14.305096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.185 [2024-06-10 11:15:14.305160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.185 [2024-06-10 11:15:14.305168] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.185 [2024-06-10 11:15:14.305175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.185 [2024-06-10 11:15:14.305180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.185 [2024-06-10 11:15:14.305316] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.185 [2024-06-10 11:15:14.305454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.185 [2024-06-10 11:15:14.305615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.185 [2024-06-10 11:15:14.305615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.755 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:17.755 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:08:17.755 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.755 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:17.755 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.014 [2024-06-10 11:15:15.015489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.014 11:15:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.014 Malloc1 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.014 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.015 [2024-06-10 11:15:15.140900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:08:18.015 { 00:08:18.015 "name": "Malloc1", 00:08:18.015 "aliases": [ 00:08:18.015 "dfa27fe3-a988-401d-b06a-3168c386fe76" 00:08:18.015 ], 00:08:18.015 "product_name": "Malloc disk", 00:08:18.015 "block_size": 512, 00:08:18.015 "num_blocks": 1048576, 00:08:18.015 "uuid": "dfa27fe3-a988-401d-b06a-3168c386fe76", 00:08:18.015 "assigned_rate_limits": { 00:08:18.015 "rw_ios_per_sec": 0, 00:08:18.015 "rw_mbytes_per_sec": 0, 00:08:18.015 "r_mbytes_per_sec": 0, 00:08:18.015 "w_mbytes_per_sec": 0 00:08:18.015 }, 00:08:18.015 "claimed": true, 00:08:18.015 "claim_type": "exclusive_write", 00:08:18.015 "zoned": false, 00:08:18.015 "supported_io_types": { 00:08:18.015 "read": true, 00:08:18.015 "write": true, 00:08:18.015 "unmap": true, 00:08:18.015 "write_zeroes": true, 00:08:18.015 "flush": true, 00:08:18.015 "reset": true, 00:08:18.015 "compare": false, 00:08:18.015 "compare_and_write": false, 00:08:18.015 "abort": true, 00:08:18.015 "nvme_admin": false, 00:08:18.015 "nvme_io": false 00:08:18.015 }, 00:08:18.015 "memory_domains": [ 00:08:18.015 { 00:08:18.015 "dma_device_id": "system", 00:08:18.015 "dma_device_type": 1 00:08:18.015 }, 00:08:18.015 { 00:08:18.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.015 "dma_device_type": 2 00:08:18.015 } 00:08:18.015 ], 00:08:18.015 "driver_specific": {} 00:08:18.015 } 00:08:18.015 ]' 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:08:18.015 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:08:18.275 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:08:18.275 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:08:18.275 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:08:18.275 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.275 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.660 11:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.660 11:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:08:19.660 11:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.660 11:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:19.660 11:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:08:21.570 11:15:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.570 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.830 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:22.770 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:23.709 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:23.709 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.710 ************************************ 00:08:23.710 START TEST filesystem_ext4 00:08:23.710 ************************************ 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:23.710 11:15:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:08:23.710 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:23.710 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.710 Discarding device blocks: 0/522240 done 00:08:23.710 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.710 Filesystem UUID: 6183a892-aca4-40c9-9add-7bd947874164 00:08:23.710 Superblock backups stored on blocks: 00:08:23.710 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.710 00:08:23.710 Allocating group tables: 0/64 done 00:08:23.710 Writing inode tables: 0/64 done 00:08:24.003 Creating journal (8192 blocks): done 00:08:24.966 Writing superblocks and filesystem accounting information: 0/64 done 00:08:24.966 00:08:24.966 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:08:24.966 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.226 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1371553 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.487 11:15:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.487 00:08:25.487 real 0m1.769s 00:08:25.487 user 0m0.030s 00:08:25.487 sys 0m0.045s 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:25.487 ************************************ 00:08:25.487 END TEST filesystem_ext4 00:08:25.487 ************************************ 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.487 ************************************ 00:08:25.487 START TEST filesystem_btrfs 00:08:25.487 ************************************ 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.487 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.488 btrfs-progs v6.6.2 00:08:25.488 See https://btrfs.readthedocs.io for more information. 00:08:25.488 00:08:25.488 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:25.488 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.488 this does not affect your deployments: 00:08:25.488 - DUP for metadata (-m dup) 00:08:25.488 - enabled no-holes (-O no-holes) 00:08:25.488 - enabled free-space-tree (-R free-space-tree) 00:08:25.488 00:08:25.488 Label: (null) 00:08:25.488 UUID: e7a083da-5176-4b9c-b739-e6d0b3c95309 00:08:25.488 Node size: 16384 00:08:25.488 Sector size: 4096 00:08:25.488 Filesystem size: 510.00MiB 00:08:25.488 Block group profiles: 00:08:25.488 Data: single 8.00MiB 00:08:25.488 Metadata: DUP 32.00MiB 00:08:25.488 System: DUP 8.00MiB 00:08:25.488 SSD detected: yes 00:08:25.488 Zoned device: no 00:08:25.488 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.488 Runtime features: free-space-tree 00:08:25.488 Checksum: crc32c 00:08:25.488 Number of devices: 1 00:08:25.488 Devices: 00:08:25.488 ID SIZE PATH 00:08:25.488 1 510.00MiB /dev/nvme0n1p1 00:08:25.488 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:25.488 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1371553 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.872 00:08:26.872 real 0m1.321s 00:08:26.872 user 0m0.023s 00:08:26.872 sys 0m0.066s 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:26.872 ************************************ 00:08:26.872 END TEST filesystem_btrfs 00:08:26.872 ************************************ 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:26.872 11:15:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.872 ************************************ 00:08:26.872 START TEST filesystem_xfs 00:08:26.872 ************************************ 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:26.872 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:26.872 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:26.872 = sectsz=512 attr=2, projid32bit=1 00:08:26.872 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:26.872 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:26.872 data = bsize=4096 blocks=130560, imaxpct=25 00:08:26.872 = sunit=0 swidth=0 blks 00:08:26.872 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:26.872 log =internal log bsize=4096 blocks=16384, version=2 00:08:26.872 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:26.872 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.812 Discarding blocks...Done. 
00:08:27.812 11:15:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:27.813 11:15:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1371553 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.762 00:08:29.762 real 0m2.931s 00:08:29.762 user 0m0.035s 00:08:29.762 sys 0m0.044s 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.762 ************************************ 00:08:29.762 END TEST filesystem_xfs 00:08:29.762 ************************************ 00:08:29.762 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:30.023 
11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1371553 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1371553 ']' 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1371553 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1371553 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1371553' 00:08:30.023 killing process with pid 1371553 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1371553 00:08:30.023 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1371553 00:08:30.284 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:30.285 00:08:30.285 real 0m13.409s 00:08:30.285 user 0m52.723s 00:08:30.285 sys 0m1.110s 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.285 ************************************ 00:08:30.285 END TEST nvmf_filesystem_no_in_capsule 00:08:30.285 ************************************ 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:30.285 11:15:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.546 
************************************ 00:08:30.546 START TEST nvmf_filesystem_in_capsule 00:08:30.546 ************************************ 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1373946 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1373946 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1373946 ']' 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:30.546 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.546 [2024-06-10 11:15:27.585859] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:08:30.546 [2024-06-10 11:15:27.585906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.546 [2024-06-10 11:15:27.675609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.546 [2024-06-10 11:15:27.742584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.546 [2024-06-10 11:15:27.742619] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.546 [2024-06-10 11:15:27.742625] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.546 [2024-06-10 11:15:27.742631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.546 [2024-06-10 11:15:27.742636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:30.546 [2024-06-10 11:15:27.742740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.546 [2024-06-10 11:15:27.742841] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.546 [2024-06-10 11:15:27.743010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.546 [2024-06-10 11:15:27.743012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 [2024-06-10 11:15:28.454454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 11:15:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 [2024-06-10 11:15:28.583914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:31.489 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:08:31.489 { 00:08:31.489 "name": "Malloc1", 00:08:31.489 "aliases": [ 00:08:31.489 "8779af8a-eda2-4e88-ab9a-91daa178a9e2" 00:08:31.489 ], 00:08:31.489 "product_name": "Malloc disk", 00:08:31.489 "block_size": 512, 00:08:31.489 "num_blocks": 1048576, 00:08:31.489 "uuid": "8779af8a-eda2-4e88-ab9a-91daa178a9e2", 00:08:31.489 "assigned_rate_limits": { 00:08:31.489 "rw_ios_per_sec": 0, 00:08:31.489 "rw_mbytes_per_sec": 0, 00:08:31.489 "r_mbytes_per_sec": 0, 00:08:31.489 "w_mbytes_per_sec": 0 00:08:31.489 }, 00:08:31.489 "claimed": true, 00:08:31.489 "claim_type": "exclusive_write", 00:08:31.489 "zoned": false, 00:08:31.489 "supported_io_types": { 00:08:31.489 "read": true, 00:08:31.489 "write": true, 00:08:31.489 "unmap": true, 00:08:31.489 "write_zeroes": true, 00:08:31.489 "flush": true, 00:08:31.489 "reset": true, 00:08:31.489 "compare": false, 00:08:31.489 "compare_and_write": false, 00:08:31.489 "abort": true, 00:08:31.490 "nvme_admin": false, 00:08:31.490 "nvme_io": false 00:08:31.490 }, 00:08:31.490 "memory_domains": [ 00:08:31.490 { 00:08:31.490 "dma_device_id": "system", 00:08:31.490 "dma_device_type": 1 00:08:31.490 }, 00:08:31.490 { 00:08:31.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.490 "dma_device_type": 2 00:08:31.490 } 00:08:31.490 ], 00:08:31.490 "driver_specific": {} 00:08:31.490 } 00:08:31.490 ]' 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:31.490 11:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:33.403 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:33.403 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:08:33.403 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.403 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:33.403 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:08:35.316 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:35.316 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:35.316 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.316 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:35.316 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:35.317 11:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:36.700 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:36.700 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.701 ************************************ 00:08:36.701 START TEST filesystem_in_capsule_ext4 00:08:36.701 ************************************ 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:08:36.701 11:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:36.701 mke2fs 1.46.5 (30-Dec-2021) 00:08:36.701 Discarding device blocks: 0/522240 done 00:08:36.701 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:36.701 Filesystem UUID: d9654f36-694e-4714-8734-e4ae5df041af 00:08:36.701 Superblock backups stored on blocks: 00:08:36.701 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:36.701 00:08:36.701 Allocating group tables: 0/64 done 00:08:36.701 Writing inode tables: 0/64 done 00:08:36.701 Creating journal (8192 blocks): done 00:08:37.900 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:08:37.900 00:08:37.900 11:15:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:08:37.900 11:15:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.900 11:15:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1373946 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.900 00:08:37.900 real 0m1.531s 00:08:37.900 user 0m0.032s 00:08:37.900 sys 0m0.041s 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:37.900 ************************************ 00:08:37.900 END TEST filesystem_in_capsule_ext4 00:08:37.900 ************************************ 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:37.900 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.161 ************************************ 00:08:38.161 START TEST filesystem_in_capsule_btrfs 00:08:38.161 ************************************ 00:08:38.161 11:15:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:38.161 btrfs-progs v6.6.2 00:08:38.161 See https://btrfs.readthedocs.io for more information. 00:08:38.161 00:08:38.161 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:38.161 NOTE: several default settings have changed in version 5.15, please make sure 00:08:38.161 this does not affect your deployments: 00:08:38.161 - DUP for metadata (-m dup) 00:08:38.161 - enabled no-holes (-O no-holes) 00:08:38.161 - enabled free-space-tree (-R free-space-tree) 00:08:38.161 00:08:38.161 Label: (null) 00:08:38.161 UUID: 9e56cadc-5440-47a9-b2ad-6e4a3a39e5ff 00:08:38.161 Node size: 16384 00:08:38.161 Sector size: 4096 00:08:38.161 Filesystem size: 510.00MiB 00:08:38.161 Block group profiles: 00:08:38.161 Data: single 8.00MiB 00:08:38.161 Metadata: DUP 32.00MiB 00:08:38.161 System: DUP 8.00MiB 00:08:38.161 SSD detected: yes 00:08:38.161 Zoned device: no 00:08:38.161 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:38.161 Runtime features: free-space-tree 00:08:38.161 Checksum: crc32c 00:08:38.161 Number of devices: 1 00:08:38.161 Devices: 00:08:38.161 ID SIZE PATH 00:08:38.161 1 510.00MiB /dev/nvme0n1p1 00:08:38.161 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:38.161 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:38.421 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1373946 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.682 00:08:38.682 real 0m0.511s 00:08:38.682 user 0m0.018s 00:08:38.682 sys 0m0.065s 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:38.682 ************************************ 00:08:38.682 END TEST filesystem_in_capsule_btrfs 00:08:38.682 ************************************ 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.682 ************************************ 00:08:38.682 START TEST filesystem_in_capsule_xfs 00:08:38.682 ************************************ 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:38.682 11:15:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:38.682 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:38.682 = sectsz=512 attr=2, projid32bit=1 00:08:38.682 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:38.682 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:38.682 data = bsize=4096 blocks=130560, imaxpct=25 00:08:38.682 = sunit=0 swidth=0 blks 00:08:38.682 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:38.682 log =internal log bsize=4096 blocks=16384, version=2 00:08:38.682 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:38.682 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:39.626 Discarding blocks...Done. 
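[Reference sketch, not captured output.] Each filesystem_in_capsule_* subtest above runs the same cycle from target/filesystem.sh: format the exported namespace's partition, mount it, write and remove a file, unmount, then confirm the SPDK target process and the block devices are still present. Condensed from the commands traced in this run (the script's retry/timeout handling and xtrace bookkeeping are omitted; the device name, mount point, and target pid are the ones seen above):

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        local force=-f
        [ "$fstype" = ext4 ] && force=-F          # mkfs.ext4 takes -F, btrfs/xfs take -f
        mkfs.$fstype $force /dev/${nvme_name}p1   # format the partition created by parted earlier
        mount /dev/${nvme_name}p1 /mnt/device     # mount it on the initiator side
        touch /mnt/device/aaa && sync             # write through the NVMe/TCP connection
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                        # target process (pid 1373946 in this run) must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"       # namespace still visible
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition still visible
    }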
00:08:39.626 11:15:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:39.626 11:15:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.540 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.540 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1373946 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.801 00:08:41.801 real 0m3.069s 00:08:41.801 user 0m0.027s 00:08:41.801 sys 0m0.053s 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 ************************************ 00:08:41.801 END TEST filesystem_in_capsule_xfs 00:08:41.801 ************************************ 00:08:41.801 11:15:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:42.062 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:42.062 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.062 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.063 11:15:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1373946 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1373946 ']' 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1373946 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1373946 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1373946' 00:08:42.063 killing process with pid 1373946 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1373946 00:08:42.063 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1373946 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.324 00:08:42.324 real 0m11.946s 00:08:42.324 user 0m47.034s 00:08:42.324 sys 0m1.060s 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.324 ************************************ 00:08:42.324 END TEST nvmf_filesystem_in_capsule 00:08:42.324 ************************************ 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.324 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.324 rmmod nvme_tcp 00:08:42.324 rmmod nvme_fabrics 00:08:42.586 rmmod nvme_keyring 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.586 11:15:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.500 11:15:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.500 00:08:44.500 real 0m36.501s 00:08:44.500 user 1m42.279s 00:08:44.500 sys 0m8.713s 00:08:44.500 11:15:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:44.500 11:15:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.500 ************************************ 00:08:44.500 END TEST nvmf_filesystem 00:08:44.500 ************************************ 00:08:44.500 11:15:41 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.500 11:15:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:44.500 11:15:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:44.500 11:15:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.761 ************************************ 00:08:44.761 START TEST nvmf_target_discovery 00:08:44.761 ************************************ 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.761 * Looking for test storage... 
00:08:44.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:44.761 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.762 11:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.022 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.022 11:15:49 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:53.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:53.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:53.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:53.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.023 11:15:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.023 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:53.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:08:53.289 00:08:53.289 --- 10.0.0.2 ping statistics --- 00:08:53.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.289 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:08:53.289 00:08:53.289 --- 10.0.0.1 ping statistics --- 00:08:53.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.289 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.289 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1380513 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1380513 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 1380513 ']' 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:53.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:53.290 11:15:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:53.290 [2024-06-10 11:15:50.353743] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:08:53.290 [2024-06-10 11:15:50.353807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.290 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.290 [2024-06-10 11:15:50.446073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.549 [2024-06-10 11:15:50.540634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.549 [2024-06-10 11:15:50.540699] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.549 [2024-06-10 11:15:50.540707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.549 [2024-06-10 11:15:50.540714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.549 [2024-06-10 11:15:50.540719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.549 [2024-06-10 11:15:50.540872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.549 [2024-06-10 11:15:50.540939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.549 [2024-06-10 11:15:50.541069] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.549 [2024-06-10 11:15:50.541069] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 [2024-06-10 11:15:51.264416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:54.118 11:15:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 Null1 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 [2024-06-10 11:15:51.305895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 Null2 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:54.118 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:54.119 11:15:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:54.119 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 Null3 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 Null4 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.379 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:08:54.379 00:08:54.379 Discovery Log Number of Records 6, Generation counter 6 00:08:54.379 =====Discovery Log Entry 0====== 00:08:54.379 trtype: tcp 00:08:54.379 adrfam: ipv4 00:08:54.379 subtype: current discovery subsystem 00:08:54.379 treq: not required 00:08:54.379 portid: 0 00:08:54.379 trsvcid: 4420 00:08:54.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:54.379 traddr: 10.0.0.2 00:08:54.379 eflags: explicit discovery connections, duplicate discovery information 00:08:54.379 sectype: none 00:08:54.379 =====Discovery Log Entry 1====== 00:08:54.379 trtype: tcp 00:08:54.379 adrfam: ipv4 00:08:54.379 subtype: nvme subsystem 00:08:54.379 treq: not required 00:08:54.379 portid: 0 00:08:54.379 trsvcid: 4420 00:08:54.379 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:54.379 traddr: 10.0.0.2 00:08:54.379 eflags: none 00:08:54.379 sectype: none 00:08:54.379 =====Discovery Log Entry 2====== 00:08:54.379 trtype: tcp 00:08:54.379 adrfam: ipv4 00:08:54.379 subtype: nvme subsystem 00:08:54.379 treq: not required 00:08:54.379 portid: 0 00:08:54.379 trsvcid: 4420 00:08:54.379 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:54.379 traddr: 10.0.0.2 00:08:54.379 eflags: none 00:08:54.379 sectype: none 00:08:54.380 =====Discovery Log Entry 3====== 00:08:54.380 trtype: tcp 00:08:54.380 adrfam: ipv4 00:08:54.380 subtype: nvme subsystem 00:08:54.380 treq: not required 00:08:54.380 portid: 0 00:08:54.380 trsvcid: 4420 00:08:54.380 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:54.380 traddr: 10.0.0.2 00:08:54.380 eflags: none 00:08:54.380 sectype: none 00:08:54.380 =====Discovery Log Entry 4====== 00:08:54.380 trtype: tcp 00:08:54.380 adrfam: ipv4 00:08:54.380 subtype: nvme subsystem 00:08:54.380 treq: not required 
00:08:54.380 portid: 0 00:08:54.380 trsvcid: 4420 00:08:54.380 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:54.380 traddr: 10.0.0.2 00:08:54.380 eflags: none 00:08:54.380 sectype: none 00:08:54.380 =====Discovery Log Entry 5====== 00:08:54.380 trtype: tcp 00:08:54.380 adrfam: ipv4 00:08:54.380 subtype: discovery subsystem referral 00:08:54.380 treq: not required 00:08:54.380 portid: 0 00:08:54.380 trsvcid: 4430 00:08:54.380 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:54.380 traddr: 10.0.0.2 00:08:54.380 eflags: none 00:08:54.380 sectype: none 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:54.380 Perform nvmf subsystem discovery via RPC 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.380 [ 00:08:54.380 { 00:08:54.380 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:54.380 "subtype": "Discovery", 00:08:54.380 "listen_addresses": [ 00:08:54.380 { 00:08:54.380 "trtype": "TCP", 00:08:54.380 "adrfam": "IPv4", 00:08:54.380 "traddr": "10.0.0.2", 00:08:54.380 "trsvcid": "4420" 00:08:54.380 } 00:08:54.380 ], 00:08:54.380 "allow_any_host": true, 00:08:54.380 "hosts": [] 00:08:54.380 }, 00:08:54.380 { 00:08:54.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.380 "subtype": "NVMe", 00:08:54.380 "listen_addresses": [ 00:08:54.380 { 00:08:54.380 "trtype": "TCP", 00:08:54.380 "adrfam": "IPv4", 00:08:54.380 "traddr": "10.0.0.2", 00:08:54.380 "trsvcid": "4420" 00:08:54.380 } 00:08:54.380 ], 00:08:54.380 "allow_any_host": true, 00:08:54.380 "hosts": [], 00:08:54.380 "serial_number": "SPDK00000000000001", 00:08:54.380 "model_number": "SPDK bdev Controller", 00:08:54.380 "max_namespaces": 32, 00:08:54.380 "min_cntlid": 1, 00:08:54.380 "max_cntlid": 65519, 00:08:54.380 "namespaces": [ 00:08:54.380 { 00:08:54.380 "nsid": 1, 00:08:54.380 "bdev_name": "Null1", 00:08:54.380 "name": "Null1", 00:08:54.380 "nguid": "D0A7494897DF47C39C4978C8938A6C65", 00:08:54.380 "uuid": "d0a74948-97df-47c3-9c49-78c8938a6c65" 00:08:54.380 } 00:08:54.380 ] 00:08:54.380 }, 00:08:54.380 { 00:08:54.380 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:54.380 "subtype": "NVMe", 00:08:54.380 "listen_addresses": [ 00:08:54.380 { 00:08:54.380 "trtype": "TCP", 00:08:54.380 "adrfam": "IPv4", 00:08:54.380 "traddr": "10.0.0.2", 00:08:54.380 "trsvcid": "4420" 00:08:54.380 } 00:08:54.380 ], 00:08:54.380 "allow_any_host": true, 00:08:54.380 "hosts": [], 00:08:54.380 "serial_number": "SPDK00000000000002", 00:08:54.380 "model_number": "SPDK bdev Controller", 00:08:54.380 "max_namespaces": 32, 00:08:54.380 "min_cntlid": 1, 00:08:54.380 "max_cntlid": 65519, 00:08:54.380 "namespaces": [ 00:08:54.380 { 00:08:54.380 "nsid": 1, 00:08:54.380 "bdev_name": "Null2", 00:08:54.380 "name": "Null2", 00:08:54.380 "nguid": "743CD567B3C54C16AD85EFBD5F0E8D4C", 00:08:54.380 "uuid": "743cd567-b3c5-4c16-ad85-efbd5f0e8d4c" 00:08:54.380 } 00:08:54.380 ] 00:08:54.380 }, 00:08:54.380 { 00:08:54.380 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:54.380 "subtype": "NVMe", 00:08:54.380 "listen_addresses": [ 00:08:54.380 { 00:08:54.380 "trtype": "TCP", 00:08:54.380 "adrfam": "IPv4", 00:08:54.380 "traddr": "10.0.0.2", 00:08:54.380 "trsvcid": "4420" 00:08:54.380 } 00:08:54.380 ], 00:08:54.380 "allow_any_host": true, 
00:08:54.380 "hosts": [], 00:08:54.380 "serial_number": "SPDK00000000000003", 00:08:54.380 "model_number": "SPDK bdev Controller", 00:08:54.380 "max_namespaces": 32, 00:08:54.380 "min_cntlid": 1, 00:08:54.380 "max_cntlid": 65519, 00:08:54.380 "namespaces": [ 00:08:54.380 { 00:08:54.380 "nsid": 1, 00:08:54.380 "bdev_name": "Null3", 00:08:54.380 "name": "Null3", 00:08:54.380 "nguid": "2E7840294828441DA107345E17620780", 00:08:54.380 "uuid": "2e784029-4828-441d-a107-345e17620780" 00:08:54.380 } 00:08:54.380 ] 00:08:54.380 }, 00:08:54.380 { 00:08:54.380 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:54.380 "subtype": "NVMe", 00:08:54.380 "listen_addresses": [ 00:08:54.380 { 00:08:54.380 "trtype": "TCP", 00:08:54.380 "adrfam": "IPv4", 00:08:54.380 "traddr": "10.0.0.2", 00:08:54.380 "trsvcid": "4420" 00:08:54.380 } 00:08:54.380 ], 00:08:54.380 "allow_any_host": true, 00:08:54.380 "hosts": [], 00:08:54.380 "serial_number": "SPDK00000000000004", 00:08:54.380 "model_number": "SPDK bdev Controller", 00:08:54.380 "max_namespaces": 32, 00:08:54.380 "min_cntlid": 1, 00:08:54.380 "max_cntlid": 65519, 00:08:54.380 "namespaces": [ 00:08:54.380 { 00:08:54.380 "nsid": 1, 00:08:54.380 "bdev_name": "Null4", 00:08:54.380 "name": "Null4", 00:08:54.380 "nguid": "04DCFD6A158440EC93128641D135C012", 00:08:54.380 "uuid": "04dcfd6a-1584-40ec-9312-8641d135c012" 00:08:54.380 } 00:08:54.380 ] 00:08:54.380 } 00:08:54.380 ] 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.380 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.641 rmmod nvme_tcp 00:08:54.641 rmmod nvme_fabrics 00:08:54.641 rmmod nvme_keyring 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1380513 ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1380513 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1380513 ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1380513 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1380513 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:54.641 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1380513' 00:08:54.641 killing process with pid 1380513 00:08:54.642 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1380513 00:08:54.642 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 1380513 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.902 11:15:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.819 11:15:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.819 00:08:56.819 real 0m12.266s 00:08:56.819 user 0m8.138s 00:08:56.819 sys 0m6.647s 00:08:56.819 11:15:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:56.819 11:15:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.819 ************************************ 00:08:56.819 END TEST nvmf_target_discovery 00:08:56.819 ************************************ 00:08:57.080 11:15:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:57.080 11:15:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:57.080 11:15:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:57.080 11:15:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 ************************************ 00:08:57.080 START TEST nvmf_referrals 00:08:57.080 ************************************ 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:57.080 * Looking for test storage... 00:08:57.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.080 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
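The three referral addresses just defined (together with the 4430 referral port set immediately below) are what this test repeatedly registers, lists and removes against the discovery service. Condensed, the referral lifecycle exercised later in this log looks like the sketch below, using the same RPCs the script calls (the ./scripts/rpc.py path is an assumption; the addresses, port and jq filter are the ones visible in the log):

  # register a referral, read it back over the RPC, then remove it again
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430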
00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.081 11:15:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.217 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.218 11:16:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:05.218 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:05.218 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.218 11:16:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:05.218 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:05.218 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.218 11:16:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.218 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:09:05.478 00:09:05.478 --- 10.0.0.2 ping statistics --- 00:09:05.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.478 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:05.478 00:09:05.478 --- 10.0.0.1 ping statistics --- 00:09:05.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.478 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.478 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1385328 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1385328 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 1385328 ']' 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:05.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:05.479 11:16:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:05.479 [2024-06-10 11:16:02.625546] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:09:05.479 [2024-06-10 11:16:02.625606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.479 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.739 [2024-06-10 11:16:02.719223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.739 [2024-06-10 11:16:02.812566] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.739 [2024-06-10 11:16:02.812630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.739 [2024-06-10 11:16:02.812638] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.739 [2024-06-10 11:16:02.812644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.739 [2024-06-10 11:16:02.812650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.739 [2024-06-10 11:16:02.812792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.739 [2024-06-10 11:16:02.812927] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.739 [2024-06-10 11:16:02.813024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.739 [2024-06-10 11:16:02.813024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.308 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 [2024-06-10 11:16:03.533459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 [2024-06-10 11:16:03.546978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
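The listener notice above completes the target bring-up for the referrals test. Stripped of the xtrace framing, the sequence the harness ran is roughly the sketch below (the relative build/ and scripts/ paths are assumptions; the namespace name, core mask and transport options are the ones shown in the log):

  # start nvmf_tgt inside the test namespace, enable the TCP transport,
  # and expose the discovery service on the target-side address
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery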
00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 
-s 8009 -o json 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.569 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.830 11:16:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:06.830 11:16:04 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:06.830 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.090 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.351 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.612 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.872 11:16:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.872 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.872 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:07.872 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.133 rmmod nvme_tcp 00:09:08.133 rmmod nvme_fabrics 00:09:08.133 rmmod nvme_keyring 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1385328 ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1385328 ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1385328' 00:09:08.133 killing process with pid 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1385328 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.133 11:16:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.679 11:16:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.679 00:09:10.679 real 0m13.324s 00:09:10.679 user 0m13.386s 00:09:10.679 sys 0m6.844s 00:09:10.679 11:16:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:09:10.679 11:16:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 ************************************ 00:09:10.679 END TEST nvmf_referrals 00:09:10.679 ************************************ 00:09:10.679 11:16:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:10.679 11:16:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:10.679 11:16:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:10.679 11:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 ************************************ 00:09:10.679 START TEST nvmf_connect_disconnect 00:09:10.679 ************************************ 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:10.679 * Looking for test storage... 00:09:10.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.679 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.680 11:16:07 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.680 11:16:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.821 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:09:18.822 00:09:18.822 --- 10.0.0.2 ping statistics --- 00:09:18.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.822 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:18.822 00:09:18.822 --- 10.0.0.1 ping statistics --- 00:09:18.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.822 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1390238 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1390238 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1390238 ']' 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:18.822 11:16:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.822 [2024-06-10 11:16:16.019196] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:09:18.823 [2024-06-10 11:16:16.019259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.083 [2024-06-10 11:16:16.110549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.083 [2024-06-10 11:16:16.203609] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.083 [2024-06-10 11:16:16.203668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.083 [2024-06-10 11:16:16.203676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.083 [2024-06-10 11:16:16.203682] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.083 [2024-06-10 11:16:16.203688] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.083 [2024-06-10 11:16:16.203812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.083 [2024-06-10 11:16:16.203950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.083 [2024-06-10 11:16:16.204022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.083 [2024-06-10 11:16:16.204023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.653 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:19.653 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:09:19.653 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.653 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:19.653 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.913 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.913 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 [2024-06-10 11:16:16.921547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.914 11:16:16 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 [2024-06-10 11:16:16.977761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:19.914 11:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:24.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.293 rmmod nvme_tcp 00:09:38.293 rmmod nvme_fabrics 00:09:38.293 rmmod nvme_keyring 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1390238 ']' 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1390238 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 1390238 ']' 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1390238 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1390238 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1390238' 00:09:38.293 killing process with pid 1390238 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1390238 00:09:38.293 11:16:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1390238 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.293 11:16:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.252 11:16:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.252 00:09:40.252 real 0m29.642s 00:09:40.252 user 1m17.639s 00:09:40.252 sys 0m7.260s 00:09:40.252 11:16:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:40.252 11:16:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:40.252 ************************************ 00:09:40.252 END TEST nvmf_connect_disconnect 00:09:40.252 ************************************ 00:09:40.252 11:16:37 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:40.252 11:16:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:40.252 11:16:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:40.252 11:16:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.252 ************************************ 00:09:40.252 START TEST nvmf_multitarget 00:09:40.252 ************************************ 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:40.252 * Looking for test storage... 
00:09:40.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.252 11:16:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.399 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:48.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:48.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:48.400 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:48.400 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.400 11:16:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:48.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:48.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:09:48.400 00:09:48.400 --- 10.0.0.2 ping statistics --- 00:09:48.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.400 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:48.400 00:09:48.400 --- 10.0.0.1 ping statistics --- 00:09:48.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.400 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1397870 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1397870 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 1397870 ']' 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:48.400 11:16:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:48.400 [2024-06-10 11:16:45.309287] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:09:48.400 [2024-06-10 11:16:45.309354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.400 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.400 [2024-06-10 11:16:45.403935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.400 [2024-06-10 11:16:45.497890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.401 [2024-06-10 11:16:45.497953] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.401 [2024-06-10 11:16:45.497961] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.401 [2024-06-10 11:16:45.497967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.401 [2024-06-10 11:16:45.497973] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.401 [2024-06-10 11:16:45.498107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.401 [2024-06-10 11:16:45.498246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.401 [2024-06-10 11:16:45.498409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.401 [2024-06-10 11:16:45.498409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.971 11:16:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:48.971 11:16:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:09:48.971 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.971 11:16:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:48.971 11:16:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:49.232 "nvmf_tgt_1" 00:09:49.232 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:49.493 "nvmf_tgt_2" 00:09:49.493 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:49.493 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:49.493 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:49.493 
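With the app up on four reactors, the multitarget case exercises the target-management RPCs: nvmf_get_targets reports the single default target, two more targets are created, and the count is re-checked. The same sequence, as assumed direct calls to the helper script used above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length            # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length            # 3: default plus the two new targets

The helper echoes the name of each target it creates, which is the "nvmf_tgt_1"/"nvmf_tgt_2" output interleaved above.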
11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:49.753 true 00:09:49.753 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:49.753 true 00:09:49.753 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:49.753 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.013 11:16:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.013 rmmod nvme_tcp 00:09:50.013 rmmod nvme_fabrics 00:09:50.013 rmmod nvme_keyring 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1397870 ']' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 1397870 ']' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1397870' 00:09:50.013 killing process with pid 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 1397870 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.013 11:16:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.559 11:16:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.559 00:09:52.559 real 0m12.082s 00:09:52.559 user 0m10.036s 00:09:52.559 sys 0m6.319s 00:09:52.559 11:16:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:52.559 11:16:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:52.559 ************************************ 00:09:52.559 END TEST nvmf_multitarget 00:09:52.559 ************************************ 00:09:52.559 11:16:49 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:52.559 11:16:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:52.559 11:16:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:52.559 11:16:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.559 ************************************ 00:09:52.559 START TEST nvmf_rpc 00:09:52.559 ************************************ 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:52.559 * Looking for test storage... 00:09:52.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.559 11:16:49 
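Deleting both extra targets brings the count back to 1, and nvmftestfini tears the environment down in reverse order: the kernel initiator modules are unloaded, the nvmf_tgt process is killed, the namespace is removed and the initiator address flushed, before the suite moves on to nvmf_rpc and re-sources nvmf/common.sh. Roughly, and assuming remove_spdk_ns boils down to deleting the namespace (its body is xtrace-suppressed here):

  modprobe -r nvme-tcp nvme-fabrics    # the script retries modprobe -v -r per module
  kill "$nvmfpid"                      # 1397870 in this run
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1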
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.559 
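The sourced nvmf/common.sh also fixes the identity used by every later nvme connect: nvme gen-hostnqn produces the host NQN, and the host ID logged above is its UUID suffix. One way to reproduce that pair (the exact extraction inside common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # the <uuid> part, 80f8a7aa-... in this run

Both values are passed back on every connect as --hostnqn/--hostid.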
11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.559 11:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.717 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.718 
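prepare_net_devs walks the PCI bus rather than trusting interface names: each supported device ID (the e810 0x8086:0x159b pair here, bound to the ice driver) is resolved to its network interface by globbing the device's net/ directory in sysfs. A sketch of the same lookup for the two ports found above:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"   # prints the bound netdev, cvl_0_0 / cvl_0_1 in this run
  done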
11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:10:00.718 00:10:00.718 --- 10.0.0.2 ping statistics --- 00:10:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.718 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:10:00.718 00:10:00.718 --- 10.0.0.1 ping statistics --- 00:10:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.718 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1402453 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1402453 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 1402453 ']' 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:00.718 11:16:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.718 [2024-06-10 11:16:57.835609] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
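Same bring-up as before, now for the rpc suite: nvmfappstart runs nvmf_tgt under ip netns exec cvl_0_0_ns_spdk with -m 0xF (four cores) and -e 0xFFFF, and waitforlisten returns once the app is accepting RPCs on /var/tmp/spdk.sock, so the rpc_cmd calls that follow all land on this instance (pid 1402453). A hedged equivalent, assuming it is run from the spdk checkout and that rpc_cmd wraps scripts/rpc.py:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once /var/tmp/spdk.sock exists, drive it directly, e.g.:
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name'   # one poll group per reactor core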
00:10:00.718 [2024-06-10 11:16:57.835662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.718 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.718 [2024-06-10 11:16:57.923089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.979 [2024-06-10 11:16:58.010252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.979 [2024-06-10 11:16:58.010313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.979 [2024-06-10 11:16:58.010320] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.979 [2024-06-10 11:16:58.010327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.979 [2024-06-10 11:16:58.010332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.979 [2024-06-10 11:16:58.010467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.979 [2024-06-10 11:16:58.010601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.979 [2024-06-10 11:16:58.010759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.979 [2024-06-10 11:16:58.010761] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:01.549 "tick_rate": 2600000000, 00:10:01.549 "poll_groups": [ 00:10:01.549 { 00:10:01.549 "name": "nvmf_tgt_poll_group_000", 00:10:01.549 "admin_qpairs": 0, 00:10:01.549 "io_qpairs": 0, 00:10:01.549 "current_admin_qpairs": 0, 00:10:01.549 "current_io_qpairs": 0, 00:10:01.549 "pending_bdev_io": 0, 00:10:01.549 "completed_nvme_io": 0, 00:10:01.549 "transports": [] 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "nvmf_tgt_poll_group_001", 00:10:01.549 "admin_qpairs": 0, 00:10:01.549 "io_qpairs": 0, 00:10:01.549 "current_admin_qpairs": 0, 00:10:01.549 "current_io_qpairs": 0, 00:10:01.549 "pending_bdev_io": 0, 00:10:01.549 "completed_nvme_io": 0, 00:10:01.549 "transports": [] 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "nvmf_tgt_poll_group_002", 00:10:01.549 "admin_qpairs": 0, 00:10:01.549 "io_qpairs": 0, 00:10:01.549 "current_admin_qpairs": 0, 00:10:01.549 "current_io_qpairs": 0, 00:10:01.549 "pending_bdev_io": 0, 00:10:01.549 "completed_nvme_io": 0, 00:10:01.549 "transports": [] 
00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "nvmf_tgt_poll_group_003", 00:10:01.549 "admin_qpairs": 0, 00:10:01.549 "io_qpairs": 0, 00:10:01.549 "current_admin_qpairs": 0, 00:10:01.549 "current_io_qpairs": 0, 00:10:01.549 "pending_bdev_io": 0, 00:10:01.549 "completed_nvme_io": 0, 00:10:01.549 "transports": [] 00:10:01.549 } 00:10:01.549 ] 00:10:01.549 }' 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:01.549 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.810 [2024-06-10 11:16:58.820762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:01.810 "tick_rate": 2600000000, 00:10:01.810 "poll_groups": [ 00:10:01.810 { 00:10:01.810 "name": "nvmf_tgt_poll_group_000", 00:10:01.810 "admin_qpairs": 0, 00:10:01.810 "io_qpairs": 0, 00:10:01.810 "current_admin_qpairs": 0, 00:10:01.810 "current_io_qpairs": 0, 00:10:01.810 "pending_bdev_io": 0, 00:10:01.810 "completed_nvme_io": 0, 00:10:01.810 "transports": [ 00:10:01.810 { 00:10:01.810 "trtype": "TCP" 00:10:01.810 } 00:10:01.810 ] 00:10:01.810 }, 00:10:01.810 { 00:10:01.810 "name": "nvmf_tgt_poll_group_001", 00:10:01.810 "admin_qpairs": 0, 00:10:01.810 "io_qpairs": 0, 00:10:01.810 "current_admin_qpairs": 0, 00:10:01.810 "current_io_qpairs": 0, 00:10:01.810 "pending_bdev_io": 0, 00:10:01.810 "completed_nvme_io": 0, 00:10:01.810 "transports": [ 00:10:01.810 { 00:10:01.810 "trtype": "TCP" 00:10:01.810 } 00:10:01.810 ] 00:10:01.810 }, 00:10:01.810 { 00:10:01.810 "name": "nvmf_tgt_poll_group_002", 00:10:01.810 "admin_qpairs": 0, 00:10:01.810 "io_qpairs": 0, 00:10:01.810 "current_admin_qpairs": 0, 00:10:01.810 "current_io_qpairs": 0, 00:10:01.810 "pending_bdev_io": 0, 00:10:01.810 "completed_nvme_io": 0, 00:10:01.810 "transports": [ 00:10:01.810 { 00:10:01.810 "trtype": "TCP" 00:10:01.810 } 00:10:01.810 ] 00:10:01.810 }, 00:10:01.810 { 00:10:01.810 "name": "nvmf_tgt_poll_group_003", 00:10:01.810 "admin_qpairs": 0, 00:10:01.810 "io_qpairs": 0, 00:10:01.810 "current_admin_qpairs": 0, 00:10:01.810 "current_io_qpairs": 0, 00:10:01.810 "pending_bdev_io": 0, 00:10:01.810 "completed_nvme_io": 0, 00:10:01.810 "transports": [ 00:10:01.810 { 00:10:01.810 "trtype": "TCP" 00:10:01.810 } 00:10:01.810 ] 00:10:01.810 } 00:10:01.810 ] 
00:10:01.810 }' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.810 Malloc1 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.810 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:01.811 11:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.811 [2024-06-10 11:16:59.005365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
--hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:10:01.811 [2024-06-10 11:16:59.032189] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:10:01.811 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:01.811 could not add new controller: failed to write to nvme-fabrics device 00:10:01.811 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:02.072 11:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.456 11:17:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.456 11:17:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:03.456 11:17:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.457 11:17:00 
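This is the host access-control check: with no hosts added and allow_any_host left disabled, the target rejects the generated host NQN (the ctrlr.c "does not allow host" error above) and nvme connect fails with an I/O error; after nvmf_subsystem_add_host registers that NQN the same connect goes through, and waitforserial then polls lsblk until a block device with the subsystem serial shows up. Assumed direct equivalents of the rpc_cmd/nvme steps above:

  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # 1 once the namespace appears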
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:03.457 11:17:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:05.397 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.658 [2024-06-10 11:17:02.656018] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:10:05.658 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:05.658 could not add new controller: failed to write to nvme-fabrics device 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:05.658 11:17:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.042 11:17:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.042 11:17:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:07.042 11:17:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.042 11:17:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:07.042 11:17:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.953 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:08.954 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 [2024-06-10 11:17:06.295056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:09.214 11:17:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.597 11:17:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.597 11:17:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:10.597 11:17:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.597 11:17:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:10.597 11:17:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 [2024-06-10 11:17:09.942561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:13.139 
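From here rpc.sh repeats the same create/connect/teardown cycle five times (loops=5): each pass creates cnode1 with the SPDKISFASTANDAWESOME serial, adds the TCP listener on 10.0.0.2:4420, attaches Malloc1 as namespace 5, opens the subsystem to any host, connects and waits for the serial to appear, then disconnects and deletes everything again. One pass, as assumed direct rpc.py and nvme-cli equivalents of the wrappers used above:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1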
11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:13.139 11:17:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.520 11:17:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.520 11:17:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:14.520 11:17:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.520 11:17:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:14.520 11:17:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:16.428 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:16.428 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.429 11:17:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 [2024-06-10 11:17:13.638580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.429 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.688 11:17:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.070 11:17:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.070 11:17:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:18.070 11:17:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.070 11:17:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:18.070 11:17:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:19.978 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 [2024-06-10 11:17:17.297571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.239 11:17:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.622 11:17:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.622 11:17:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:21.622 11:17:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
00:10:21.622 11:17:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:21.622 11:17:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 [2024-06-10 11:17:20.954077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.163 11:17:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.545 11:17:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.545 11:17:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:25.545 11:17:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.545 11:17:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:25.545 11:17:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.455 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 [2024-06-10 11:17:24.607408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.456 [2024-06-10 11:17:24.667549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.456 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 [2024-06-10 11:17:24.731735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.716 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 [2024-06-10 11:17:24.791945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 [2024-06-10 11:17:24.852139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:27.717 "tick_rate": 2600000000, 00:10:27.717 "poll_groups": [ 00:10:27.717 { 00:10:27.717 "name": "nvmf_tgt_poll_group_000", 00:10:27.717 "admin_qpairs": 0, 00:10:27.717 
"io_qpairs": 224, 00:10:27.717 "current_admin_qpairs": 0, 00:10:27.717 "current_io_qpairs": 0, 00:10:27.717 "pending_bdev_io": 0, 00:10:27.717 "completed_nvme_io": 227, 00:10:27.717 "transports": [ 00:10:27.717 { 00:10:27.717 "trtype": "TCP" 00:10:27.717 } 00:10:27.717 ] 00:10:27.717 }, 00:10:27.717 { 00:10:27.717 "name": "nvmf_tgt_poll_group_001", 00:10:27.717 "admin_qpairs": 1, 00:10:27.717 "io_qpairs": 223, 00:10:27.717 "current_admin_qpairs": 0, 00:10:27.717 "current_io_qpairs": 0, 00:10:27.717 "pending_bdev_io": 0, 00:10:27.717 "completed_nvme_io": 255, 00:10:27.717 "transports": [ 00:10:27.717 { 00:10:27.717 "trtype": "TCP" 00:10:27.717 } 00:10:27.717 ] 00:10:27.717 }, 00:10:27.717 { 00:10:27.717 "name": "nvmf_tgt_poll_group_002", 00:10:27.717 "admin_qpairs": 6, 00:10:27.717 "io_qpairs": 218, 00:10:27.717 "current_admin_qpairs": 0, 00:10:27.717 "current_io_qpairs": 0, 00:10:27.717 "pending_bdev_io": 0, 00:10:27.717 "completed_nvme_io": 528, 00:10:27.717 "transports": [ 00:10:27.717 { 00:10:27.717 "trtype": "TCP" 00:10:27.717 } 00:10:27.717 ] 00:10:27.717 }, 00:10:27.717 { 00:10:27.717 "name": "nvmf_tgt_poll_group_003", 00:10:27.717 "admin_qpairs": 0, 00:10:27.717 "io_qpairs": 224, 00:10:27.717 "current_admin_qpairs": 0, 00:10:27.717 "current_io_qpairs": 0, 00:10:27.717 "pending_bdev_io": 0, 00:10:27.717 "completed_nvme_io": 229, 00:10:27.717 "transports": [ 00:10:27.717 { 00:10:27.717 "trtype": "TCP" 00:10:27.717 } 00:10:27.717 ] 00:10:27.717 } 00:10:27.717 ] 00:10:27.717 }' 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:27.717 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.977 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:27.977 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:27.977 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:27.978 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:27.978 11:17:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.978 rmmod nvme_tcp 00:10:27.978 rmmod nvme_fabrics 00:10:27.978 rmmod nvme_keyring 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:27.978 11:17:25 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1402453 ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1402453 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 1402453 ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 1402453 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1402453 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1402453' 00:10:27.978 killing process with pid 1402453 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 1402453 00:10:27.978 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 1402453 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.238 11:17:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.220 11:17:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.220 00:10:30.220 real 0m37.949s 00:10:30.220 user 1m51.263s 00:10:30.220 sys 0m7.764s 00:10:30.220 11:17:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:30.220 11:17:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.220 ************************************ 00:10:30.220 END TEST nvmf_rpc 00:10:30.220 ************************************ 00:10:30.220 11:17:27 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:30.220 11:17:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:30.220 11:17:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:30.220 11:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.220 ************************************ 00:10:30.220 START TEST nvmf_invalid 00:10:30.220 ************************************ 00:10:30.220 11:17:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:30.481 * Looking for test storage... 
00:10:30.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:30.481 11:17:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.482 11:17:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:38.616 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:38.616 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:38.616 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.616 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:38.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:10:38.617 00:10:38.617 --- 10.0.0.2 ping statistics --- 00:10:38.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.617 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:38.617 00:10:38.617 --- 10.0.0.1 ping statistics --- 00:10:38.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.617 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1411677 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1411677 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 1411677 ']' 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:38.617 11:17:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 [2024-06-10 11:17:35.799607] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:10:38.617 [2024-06-10 11:17:35.799668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.617 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.877 [2024-06-10 11:17:35.895225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.877 [2024-06-10 11:17:35.989043] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.877 [2024-06-10 11:17:35.989104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.877 [2024-06-10 11:17:35.989112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.877 [2024-06-10 11:17:35.989119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.877 [2024-06-10 11:17:35.989124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.877 [2024-06-10 11:17:35.989257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.877 [2024-06-10 11:17:35.989394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.877 [2024-06-10 11:17:35.989561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.877 [2024-06-10 11:17:35.989562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.446 11:17:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:39.446 11:17:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:10:39.446 11:17:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.446 11:17:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:39.446 11:17:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13600 00:10:39.706 [2024-06-10 11:17:36.878009] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:39.706 { 00:10:39.706 "nqn": "nqn.2016-06.io.spdk:cnode13600", 00:10:39.706 "tgt_name": "foobar", 00:10:39.706 "method": "nvmf_create_subsystem", 00:10:39.706 "req_id": 1 00:10:39.706 } 00:10:39.706 Got JSON-RPC error response 00:10:39.706 response: 00:10:39.706 { 00:10:39.706 "code": -32603, 00:10:39.706 "message": "Unable to find target foobar" 00:10:39.706 }' 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:39.706 { 00:10:39.706 "nqn": "nqn.2016-06.io.spdk:cnode13600", 00:10:39.706 "tgt_name": "foobar", 00:10:39.706 "method": "nvmf_create_subsystem", 00:10:39.706 "req_id": 1 00:10:39.706 } 00:10:39.706 Got JSON-RPC error response 00:10:39.706 response: 00:10:39.706 { 00:10:39.706 "code": -32603, 00:10:39.706 "message": "Unable to find target foobar" 00:10:39.706 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:39.706 11:17:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30311 00:10:39.966 [2024-06-10 11:17:37.090748] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30311: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:39.966 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:39.966 { 00:10:39.966 "nqn": "nqn.2016-06.io.spdk:cnode30311", 00:10:39.966 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:39.966 "method": "nvmf_create_subsystem", 00:10:39.966 "req_id": 1 00:10:39.966 } 00:10:39.966 Got JSON-RPC error response 00:10:39.966 response: 00:10:39.966 { 00:10:39.966 "code": -32602, 00:10:39.966 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:39.966 }' 00:10:39.966 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:39.966 { 00:10:39.966 "nqn": "nqn.2016-06.io.spdk:cnode30311", 00:10:39.966 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:39.966 "method": "nvmf_create_subsystem", 00:10:39.966 "req_id": 1 00:10:39.966 } 00:10:39.966 Got JSON-RPC error response 00:10:39.966 response: 00:10:39.966 { 00:10:39.966 "code": -32602, 00:10:39.966 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:39.966 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:39.966 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:39.966 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12578 00:10:40.226 [2024-06-10 11:17:37.303432] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12578: invalid model number 'SPDK_Controller' 00:10:40.226 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:40.226 { 00:10:40.227 "nqn": "nqn.2016-06.io.spdk:cnode12578", 00:10:40.227 "model_number": "SPDK_Controller\u001f", 00:10:40.227 "method": "nvmf_create_subsystem", 00:10:40.227 "req_id": 1 00:10:40.227 } 00:10:40.227 Got JSON-RPC error response 00:10:40.227 response: 00:10:40.227 { 00:10:40.227 "code": -32602, 00:10:40.227 "message": "Invalid MN SPDK_Controller\u001f" 00:10:40.227 }' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:40.227 { 00:10:40.227 "nqn": "nqn.2016-06.io.spdk:cnode12578", 00:10:40.227 "model_number": "SPDK_Controller\u001f", 00:10:40.227 "method": "nvmf_create_subsystem", 00:10:40.227 "req_id": 1 00:10:40.227 } 00:10:40.227 Got JSON-RPC error response 00:10:40.227 response: 00:10:40.227 { 00:10:40.227 "code": -32602, 00:10:40.227 "message": "Invalid MN SPDK_Controller\u001f" 00:10:40.227 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:40.227 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 60 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:40.487 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z.h^\c~9"8MkBeR<>0vXJ' 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'z.h^\c~9"8MkBeR<>0vXJ' nqn.2016-06.io.spdk:cnode23397 00:10:40.488 [2024-06-10 11:17:37.672593] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23397: invalid serial number 'z.h^\c~9"8MkBeR<>0vXJ' 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:40.488 { 00:10:40.488 "nqn": "nqn.2016-06.io.spdk:cnode23397", 00:10:40.488 "serial_number": "z.h^\\c~9\"8MkBeR<>0vXJ", 00:10:40.488 "method": "nvmf_create_subsystem", 00:10:40.488 "req_id": 1 00:10:40.488 } 00:10:40.488 Got JSON-RPC error response 00:10:40.488 response: 00:10:40.488 { 00:10:40.488 "code": -32602, 
00:10:40.488 "message": "Invalid SN z.h^\\c~9\"8MkBeR<>0vXJ" 00:10:40.488 }' 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:40.488 { 00:10:40.488 "nqn": "nqn.2016-06.io.spdk:cnode23397", 00:10:40.488 "serial_number": "z.h^\\c~9\"8MkBeR<>0vXJ", 00:10:40.488 "method": "nvmf_create_subsystem", 00:10:40.488 "req_id": 1 00:10:40.488 } 00:10:40.488 Got JSON-RPC error response 00:10:40.488 response: 00:10:40.488 { 00:10:40.488 "code": -32602, 00:10:40.488 "message": "Invalid SN z.h^\\c~9\"8MkBeR<>0vXJ" 00:10:40.488 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:40.488 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:40.748 11:17:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:40.748 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:40.749 11:17:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:40.749 11:17:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:40.749 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:40.750 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.750 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@29 -- # string='\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e"^(' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e"^(' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e"^(' nqn.2016-06.io.spdk:cnode21697 00:10:41.010 [2024-06-10 11:17:38.186219] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21697: invalid model number '\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e"^(' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:41.010 { 00:10:41.010 "nqn": "nqn.2016-06.io.spdk:cnode21697", 00:10:41.010 "model_number": "\\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e\"^(", 00:10:41.010 "method": "nvmf_create_subsystem", 00:10:41.010 "req_id": 1 00:10:41.010 } 00:10:41.010 Got JSON-RPC error response 00:10:41.010 response: 00:10:41.010 { 00:10:41.010 "code": -32602, 00:10:41.010 "message": "Invalid MN \\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e\"^(" 00:10:41.010 }' 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:41.010 { 00:10:41.010 "nqn": "nqn.2016-06.io.spdk:cnode21697", 00:10:41.010 "model_number": "\\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e\"^(", 00:10:41.010 "method": "nvmf_create_subsystem", 00:10:41.010 "req_id": 1 00:10:41.010 } 00:10:41.010 Got JSON-RPC error response 00:10:41.010 response: 00:10:41.010 { 00:10:41.010 "code": -32602, 00:10:41.010 "message": "Invalid MN \\-#f^ o0pTX}^7nM7|8,lkVpWPvh#3.*oTMa5:e\"^(" 00:10:41.010 } == 
*\I\n\v\a\l\i\d\ \M\N* ]] 00:10:41.010 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:41.270 [2024-06-10 11:17:38.394974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.270 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:41.529 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:41.529 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:41.529 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:41.529 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:41.529 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:41.788 [2024-06-10 11:17:38.824327] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:41.788 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:41.788 { 00:10:41.788 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:41.788 "listen_address": { 00:10:41.788 "trtype": "tcp", 00:10:41.788 "traddr": "", 00:10:41.788 "trsvcid": "4421" 00:10:41.788 }, 00:10:41.788 "method": "nvmf_subsystem_remove_listener", 00:10:41.788 "req_id": 1 00:10:41.788 } 00:10:41.788 Got JSON-RPC error response 00:10:41.788 response: 00:10:41.788 { 00:10:41.788 "code": -32602, 00:10:41.788 "message": "Invalid parameters" 00:10:41.788 }' 00:10:41.788 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:41.788 { 00:10:41.788 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:41.788 "listen_address": { 00:10:41.788 "trtype": "tcp", 00:10:41.788 "traddr": "", 00:10:41.788 "trsvcid": "4421" 00:10:41.788 }, 00:10:41.788 "method": "nvmf_subsystem_remove_listener", 00:10:41.788 "req_id": 1 00:10:41.788 } 00:10:41.788 Got JSON-RPC error response 00:10:41.788 response: 00:10:41.788 { 00:10:41.788 "code": -32602, 00:10:41.788 "message": "Invalid parameters" 00:10:41.788 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:41.788 11:17:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25459 -i 0 00:10:42.047 [2024-06-10 11:17:39.037035] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25459: invalid cntlid range [0-65519] 00:10:42.047 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:42.047 { 00:10:42.047 "nqn": "nqn.2016-06.io.spdk:cnode25459", 00:10:42.047 "min_cntlid": 0, 00:10:42.047 "method": "nvmf_create_subsystem", 00:10:42.047 "req_id": 1 00:10:42.047 } 00:10:42.047 Got JSON-RPC error response 00:10:42.047 response: 00:10:42.047 { 00:10:42.047 "code": -32602, 00:10:42.047 "message": "Invalid cntlid range [0-65519]" 00:10:42.047 }' 00:10:42.047 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:42.047 { 00:10:42.047 "nqn": "nqn.2016-06.io.spdk:cnode25459", 00:10:42.047 "min_cntlid": 0, 00:10:42.047 "method": "nvmf_create_subsystem", 00:10:42.047 "req_id": 1 00:10:42.047 } 00:10:42.047 Got JSON-RPC error response 00:10:42.047 response: 00:10:42.047 { 00:10:42.047 "code": -32602, 00:10:42.047 
"message": "Invalid cntlid range [0-65519]" 00:10:42.047 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.047 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15863 -i 65520 00:10:42.047 [2024-06-10 11:17:39.245683] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15863: invalid cntlid range [65520-65519] 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:42.307 { 00:10:42.307 "nqn": "nqn.2016-06.io.spdk:cnode15863", 00:10:42.307 "min_cntlid": 65520, 00:10:42.307 "method": "nvmf_create_subsystem", 00:10:42.307 "req_id": 1 00:10:42.307 } 00:10:42.307 Got JSON-RPC error response 00:10:42.307 response: 00:10:42.307 { 00:10:42.307 "code": -32602, 00:10:42.307 "message": "Invalid cntlid range [65520-65519]" 00:10:42.307 }' 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:42.307 { 00:10:42.307 "nqn": "nqn.2016-06.io.spdk:cnode15863", 00:10:42.307 "min_cntlid": 65520, 00:10:42.307 "method": "nvmf_create_subsystem", 00:10:42.307 "req_id": 1 00:10:42.307 } 00:10:42.307 Got JSON-RPC error response 00:10:42.307 response: 00:10:42.307 { 00:10:42.307 "code": -32602, 00:10:42.307 "message": "Invalid cntlid range [65520-65519]" 00:10:42.307 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28071 -I 0 00:10:42.307 [2024-06-10 11:17:39.458386] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28071: invalid cntlid range [1-0] 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:42.307 { 00:10:42.307 "nqn": "nqn.2016-06.io.spdk:cnode28071", 00:10:42.307 "max_cntlid": 0, 00:10:42.307 "method": "nvmf_create_subsystem", 00:10:42.307 "req_id": 1 00:10:42.307 } 00:10:42.307 Got JSON-RPC error response 00:10:42.307 response: 00:10:42.307 { 00:10:42.307 "code": -32602, 00:10:42.307 "message": "Invalid cntlid range [1-0]" 00:10:42.307 }' 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:42.307 { 00:10:42.307 "nqn": "nqn.2016-06.io.spdk:cnode28071", 00:10:42.307 "max_cntlid": 0, 00:10:42.307 "method": "nvmf_create_subsystem", 00:10:42.307 "req_id": 1 00:10:42.307 } 00:10:42.307 Got JSON-RPC error response 00:10:42.307 response: 00:10:42.307 { 00:10:42.307 "code": -32602, 00:10:42.307 "message": "Invalid cntlid range [1-0]" 00:10:42.307 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.307 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27642 -I 65520 00:10:42.567 [2024-06-10 11:17:39.663043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27642: invalid cntlid range [1-65520] 00:10:42.567 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:42.567 { 00:10:42.567 "nqn": "nqn.2016-06.io.spdk:cnode27642", 00:10:42.567 "max_cntlid": 65520, 00:10:42.567 "method": "nvmf_create_subsystem", 00:10:42.567 "req_id": 1 00:10:42.567 } 00:10:42.567 Got JSON-RPC error response 00:10:42.567 response: 00:10:42.567 { 00:10:42.567 "code": -32602, 00:10:42.567 "message": "Invalid cntlid 
range [1-65520]" 00:10:42.567 }' 00:10:42.567 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:42.567 { 00:10:42.567 "nqn": "nqn.2016-06.io.spdk:cnode27642", 00:10:42.567 "max_cntlid": 65520, 00:10:42.567 "method": "nvmf_create_subsystem", 00:10:42.567 "req_id": 1 00:10:42.567 } 00:10:42.567 Got JSON-RPC error response 00:10:42.567 response: 00:10:42.567 { 00:10:42.567 "code": -32602, 00:10:42.567 "message": "Invalid cntlid range [1-65520]" 00:10:42.567 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.567 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11803 -i 6 -I 5 00:10:42.828 [2024-06-10 11:17:39.871728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11803: invalid cntlid range [6-5] 00:10:42.828 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:42.828 { 00:10:42.828 "nqn": "nqn.2016-06.io.spdk:cnode11803", 00:10:42.828 "min_cntlid": 6, 00:10:42.828 "max_cntlid": 5, 00:10:42.828 "method": "nvmf_create_subsystem", 00:10:42.828 "req_id": 1 00:10:42.828 } 00:10:42.828 Got JSON-RPC error response 00:10:42.828 response: 00:10:42.828 { 00:10:42.828 "code": -32602, 00:10:42.828 "message": "Invalid cntlid range [6-5]" 00:10:42.828 }' 00:10:42.828 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:42.828 { 00:10:42.828 "nqn": "nqn.2016-06.io.spdk:cnode11803", 00:10:42.828 "min_cntlid": 6, 00:10:42.828 "max_cntlid": 5, 00:10:42.828 "method": "nvmf_create_subsystem", 00:10:42.828 "req_id": 1 00:10:42.828 } 00:10:42.828 Got JSON-RPC error response 00:10:42.828 response: 00:10:42.828 { 00:10:42.828 "code": -32602, 00:10:42.828 "message": "Invalid cntlid range [6-5]" 00:10:42.828 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.828 11:17:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:42.828 11:17:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:42.828 { 00:10:42.828 "name": "foobar", 00:10:42.828 "method": "nvmf_delete_target", 00:10:42.828 "req_id": 1 00:10:42.828 } 00:10:42.828 Got JSON-RPC error response 00:10:42.828 response: 00:10:42.828 { 00:10:42.828 "code": -32602, 00:10:42.828 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:42.828 }' 00:10:42.828 11:17:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:42.828 { 00:10:42.828 "name": "foobar", 00:10:42.828 "method": "nvmf_delete_target", 00:10:42.828 "req_id": 1 00:10:42.828 } 00:10:42.828 Got JSON-RPC error response 00:10:42.828 response: 00:10:42.828 { 00:10:42.828 "code": -32602, 00:10:42.828 "message": "The specified target doesn't exist, cannot delete it." 
00:10:42.828 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.829 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.829 rmmod nvme_tcp 00:10:42.829 rmmod nvme_fabrics 00:10:43.089 rmmod nvme_keyring 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1411677 ']' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 1411677 ']' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1411677' 00:10:43.089 killing process with pid 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 1411677 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.089 11:17:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.116 11:17:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:45.382 00:10:45.382 real 0m14.912s 00:10:45.382 user 0m22.048s 00:10:45.382 sys 0m7.117s 00:10:45.382 11:17:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:45.382 11:17:42 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:45.383 ************************************ 00:10:45.383 END TEST nvmf_invalid 00:10:45.383 ************************************ 00:10:45.383 11:17:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:45.383 11:17:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:45.383 11:17:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:45.383 11:17:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:45.383 ************************************ 00:10:45.383 START TEST nvmf_abort 00:10:45.383 ************************************ 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:45.383 * Looking for test storage... 00:10:45.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:45.383 11:17:42 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:45.383 11:17:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.517 
11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:53.517 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:53.517 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.517 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:53.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:53.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:10:53.518 00:10:53.518 --- 10.0.0.2 ping statistics --- 00:10:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.518 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:10:53.518 00:10:53.518 --- 10.0.0.1 ping statistics --- 00:10:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.518 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1416966 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1416966 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 1416966 ']' 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 11:17:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:53.518 [2024-06-10 11:17:50.709379] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
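
The trace above is the nvmf_tcp_init step from /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target, the second port (cvl_0_1) stays in the host namespace as the initiator, the two sides get 10.0.0.2/24 and 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the link before nvmf_tgt is launched inside the namespace. A minimal sketch of that topology, reusing the interface names from this run (this is not the script itself):

#!/usr/bin/env bash
# Sketch of the namespace topology that nvmf_tcp_init builds, per the trace above.
set -e
NS=cvl_0_0_ns_spdk   # namespace hosting the NVMe-oF target
TGT_IF=cvl_0_0       # port moved into the namespace, target side (10.0.0.2)
INI_IF=cvl_0_1       # port left in the host namespace, initiator side (10.0.0.1)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic on the default port, then probe connectivity both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
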
00:10:53.518 [2024-06-10 11:17:50.709429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.777 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.777 [2024-06-10 11:17:50.779572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.778 [2024-06-10 11:17:50.845164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.778 [2024-06-10 11:17:50.845203] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.778 [2024-06-10 11:17:50.845209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.778 [2024-06-10 11:17:50.845215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.778 [2024-06-10 11:17:50.845221] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.778 [2024-06-10 11:17:50.845321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.778 [2024-06-10 11:17:50.845448] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.778 [2024-06-10 11:17:50.845449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.346 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:54.346 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:10:54.346 11:17:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.346 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:54.346 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 [2024-06-10 11:17:51.607124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 Malloc0 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 Delay0 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:54.605 11:17:51 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.605 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.606 [2024-06-10 11:17:51.687779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.606 11:17:51 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:54.606 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.606 [2024-06-10 11:17:51.776575] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:57.142 Initializing NVMe Controllers 00:10:57.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:57.142 controller IO queue size 128 less than required 00:10:57.142 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:57.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:57.142 Initialization complete. Launching workers. 
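
Everything the abort test needs is configured through the rpc.py calls visible in the trace: a TCP transport, a 64 MB malloc bdev (4 KiB blocks) wrapped in a delay bdev so requests stay in flight long enough to be abortable, a subsystem nqn.2016-06.io.spdk:cnode0 exposing it on 10.0.0.2:4420, and finally the abort example pushing queue depth 128 from a single core for one second; its per-queue summary follows below. A hedged replay of that sequence, assuming an SPDK checkout as the working directory and the default rpc.py socket:

# Sketch of the RPC sequence issued by target/abort.sh, mirroring the trace above.
# Assumes nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock.
RPC=scripts/rpc.py   # path inside an SPDK checkout (assumption)

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
# The delay bdev keeps I/O outstanding so the abort example has real requests to abort.
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Single-core, 1-second run at queue depth 128; aborts are submitted against in-flight I/O.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
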
00:10:57.142 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37587 00:10:57.142 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37648, failed to submit 62 00:10:57.142 success 37591, unsuccess 57, failed 0 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:57.142 rmmod nvme_tcp 00:10:57.142 rmmod nvme_fabrics 00:10:57.142 rmmod nvme_keyring 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1416966 ']' 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1416966 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 1416966 ']' 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 1416966 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1416966 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1416966' 00:10:57.142 killing process with pid 1416966 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 1416966 00:10:57.142 11:17:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 1416966 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.142 11:17:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.052 11:17:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.052 00:10:59.052 real 0m13.748s 00:10:59.052 user 0m13.778s 00:10:59.052 sys 0m6.798s 00:10:59.052 11:17:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:59.052 11:17:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:59.053 ************************************ 00:10:59.053 END TEST nvmf_abort 00:10:59.053 ************************************ 00:10:59.053 11:17:56 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:59.053 11:17:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:59.053 11:17:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:59.053 11:17:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.053 ************************************ 00:10:59.053 START TEST nvmf_ns_hotplug_stress 00:10:59.053 ************************************ 00:10:59.053 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:59.313 * Looking for test storage... 00:10:59.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.313 11:17:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.313 11:17:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.313 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.314 11:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.533 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:07.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:07.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.534 11:18:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:07.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:07.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
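
As in the first test, common.sh first scans the PCI bus for NICs it supports (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs) and then resolves each matching function to its kernel interface through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 in the trace. A small sketch of that sysfs lookup, limited to the E810 device IDs seen in this run:

# Sketch: resolve Intel E810 PCI functions to their net devices via sysfs,
# the way gather_supported_nvmf_pci_devs does in the trace above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")   # 0x8086 for Intel
    device=$(cat "$pci/device")   # 0x159b / 0x1592 for E810 ports
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x159b || $device == 0x1592 ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
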
00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:11:07.534 00:11:07.534 --- 10.0.0.2 ping statistics --- 00:11:07.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.534 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:11:07.534 00:11:07.534 --- 10.0.0.1 ping statistics --- 00:11:07.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.534 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1422155 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1422155 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 1422155 ']' 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:07.534 11:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.534 [2024-06-10 11:18:04.629395] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:11:07.534 [2024-06-10 11:18:04.629462] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.535 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.535 [2024-06-10 11:18:04.702995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.795 [2024-06-10 11:18:04.773503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:07.795 [2024-06-10 11:18:04.773541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.795 [2024-06-10 11:18:04.773548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.795 [2024-06-10 11:18:04.773554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.795 [2024-06-10 11:18:04.773560] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.795 [2024-06-10 11:18:04.773660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.795 [2024-06-10 11:18:04.773808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.795 [2024-06-10 11:18:04.773810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:08.366 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:08.627 [2024-06-10 11:18:05.704742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.627 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:08.887 11:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.148 [2024-06-10 11:18:06.118114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.148 11:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.148 11:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:09.409 Malloc0 00:11:09.409 11:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:09.669 Delay0 00:11:09.669 11:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.929 11:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:09.929 NULL1 00:11:09.929 11:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:10.189 11:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1422544 00:11:10.189 11:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:10.189 11:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:10.189 11:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.189 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.573 Read completed with error (sct=0, sc=11) 00:11:11.573 11:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.573 11:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:11.573 11:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:11.833 true 00:11:11.833 11:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:11.833 11:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.780 11:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.780 11:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:12.780 11:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:13.041 true 00:11:13.041 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:13.041 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.300 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.559 11:18:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:13.559 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:13.559 true 00:11:13.559 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:13.559 11:18:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.939 11:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.939 11:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:14.939 11:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:14.939 true 00:11:15.199 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:15.199 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.199 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.459 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:15.459 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:15.718 true 00:11:15.718 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:15.718 11:18:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.656 11:18:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.916 11:18:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:16.916 11:18:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:17.176 true 00:11:17.176 11:18:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:17.176 11:18:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.117 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.117 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:18.117 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:18.378 true 00:11:18.378 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:18.378 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.639 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.639 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:18.639 11:18:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:18.899 true 00:11:18.899 11:18:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:18.899 11:18:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.837 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.101 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:20.101 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:20.361 true 00:11:20.361 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:20.361 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.620 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.880 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:20.880 11:18:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:20.880 true 00:11:20.880 
11:18:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:20.880 11:18:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.262 11:18:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.263 11:18:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:22.263 11:18:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:22.522 true 00:11:22.522 11:18:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:22.522 11:18:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.463 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.463 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:23.463 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:23.724 true 00:11:23.724 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:23.724 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.983 11:18:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.243 11:18:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:24.243 11:18:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:24.243 true 00:11:24.243 11:18:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:24.243 11:18:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 11:18:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.623 11:18:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:25.623 11:18:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:25.623 true 00:11:25.623 11:18:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:25.623 11:18:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.564 11:18:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.824 11:18:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:26.824 11:18:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:26.824 true 00:11:27.084 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:27.084 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.084 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.392 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:27.393 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:27.654 true 00:11:27.654 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:27.654 11:18:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.857 11:18:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.857 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:28.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.857 11:18:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:28.857 11:18:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:29.118 true 00:11:29.118 11:18:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:29.118 11:18:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.058 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.058 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:30.058 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:30.318 true 00:11:30.318 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:30.318 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.578 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.838 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:30.838 11:18:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:30.838 true 00:11:30.838 11:18:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:30.838 11:18:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.219 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.219 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:32.219 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:32.479 true 00:11:32.479 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:32.479 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.739 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.739 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:32.739 11:18:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:32.999 true 00:11:32.999 11:18:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:32.999 11:18:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:34.380 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:34.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:34.380 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:34.380 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:34.639 true 00:11:34.639 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:34.639 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.899 11:18:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.158 11:18:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:35.158 11:18:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:35.158 true 00:11:35.158 11:18:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:35.159 11:18:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.601 11:18:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.601 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:11:36.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.601 11:18:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:36.601 11:18:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:36.860 true 00:11:36.860 11:18:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:36.860 11:18:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.799 11:18:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.799 11:18:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:37.799 11:18:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:38.058 true 00:11:38.058 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:38.058 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.318 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.318 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:38.318 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:38.578 true 00:11:38.578 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544 00:11:38.578 11:18:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.960 11:18:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.960 11:18:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:39.960 11:18:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 
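Editor's note: the iterations traced above all repeat the same add/remove/resize pattern; below is a minimal sketch of that loop, reconstructed from the sh@44-@50 xtrace lines (variable names such as rpc_py and perf_pid, and the exact size-increment expression, are stand-ins for illustration, not verbatim script text).

  # hedged reconstruction of the resize loop seen in the trace (sh@44-@50)
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$perf_pid"; do                                    # keep going while the I/O generator is alive (sh@44)
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove namespace 1 (sh@45)
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add the Delay0 bdev back (sh@46)
      null_size=$((null_size + 1))                                 # observed values grow by 1 each pass (sh@49)
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                # resize NULL1 to the new size (sh@50)
  done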
00:11:40.220 true
00:11:40.220 11:18:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544
00:11:40.220 11:18:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:40.790 Initializing NVMe Controllers
00:11:40.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:40.790 Controller IO queue size 128, less than required.
00:11:40.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:40.790 Controller IO queue size 128, less than required.
00:11:40.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:40.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:40.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:40.790 Initialization complete. Launching workers.
00:11:40.790 ========================================================
00:11:40.790 Latency(us)
00:11:40.790 Device Information : IOPS MiB/s Average min max
00:11:40.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1922.70 0.94 45586.14 2117.05 1125894.27
00:11:40.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19856.03 9.70 6446.12 1341.36 461389.84
00:11:40.790 ========================================================
00:11:40.791 Total : 21778.73 10.63 9901.53 1341.36 1125894.27
00:11:41.051 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:41.051 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:11:41.051 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:11:41.310 true
00:11:41.310 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1422544
00:11:41.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1422544) - No such process
00:11:41.310 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1422544
00:11:41.310 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:41.571 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:41.830 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:41.830 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:41.830 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:41.830 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:41.830 11:18:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdev_null_create null0 100 4096 00:11:41.830 null0 00:11:41.830 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:41.830 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:41.830 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:42.090 null1 00:11:42.090 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.090 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.090 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:42.349 null2 00:11:42.349 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.349 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.349 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:42.609 null3 00:11:42.609 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.609 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.610 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:42.610 null4 00:11:42.610 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.610 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.610 11:18:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:42.869 null5 00:11:42.870 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.870 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.870 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:43.130 null6 00:11:43.130 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:43.130 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:43.130 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:43.390 null7 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
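Editor's note: a quick check of the I/O summary printed a few entries above, assuming the Total average is the IOPS-weighted mean of the two per-namespace averages (the aggregation rule is an assumption; only the figures come from the log).

  # weighted-average check of the latency summary
  awk 'BEGIN {
      i1 = 1922.70;  a1 = 45586.14   # NSID 1: IOPS, average latency (us)
      i2 = 19856.03; a2 = 6446.12    # NSID 2: IOPS, average latency (us)
      t = i1 + i2
      printf "IOPS total: %.2f  weighted average: %.2f us\n", t, (i1 * a1 + i2 * a2) / t
  }'
  # prints: IOPS total: 21778.73  weighted average: 9901.53 us -- consistent with the Total row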
00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.390 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1428616 1428617 1428620 1428623 1428626 1428629 1428632 1428635 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.391 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:43.652 11:18:40 
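Editor's note: the eight background workers launched in the entries above each run the loop traced at sh@14-@18; below is a hedged reconstruction of the worker and its launcher from the xtrace output (rpc_py stands in for the full rpc.py path shown in the log; the real ns_hotplug_stress.sh may differ in detail).

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # one worker: repeatedly hot-add and hot-remove a namespace backed by a null bdev (sh@14-@18)
  add_remove() {
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  # create the eight null bdevs (sh@59-@60), then launch one worker per bdev and wait (sh@62-@66)
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"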
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.652 11:18:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:43.913 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.173 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.174 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.434 11:18:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.434 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:44.694 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.694 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.694 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.694 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.694 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:44.695 11:18:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.955 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.216 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:45.477 11:18:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.477 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.738 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.739 11:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.999 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:46.259 11:18:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.259 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.519 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.780 11:18:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:47.041 11:18:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:47.041 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.301 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.302 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.606 rmmod nvme_tcp 00:11:47.606 rmmod nvme_fabrics 00:11:47.606 rmmod nvme_keyring 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1422155 ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1422155 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 1422155 ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 1422155 00:11:47.606 11:18:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1422155 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1422155' 00:11:47.606 killing process with pid 1422155 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 1422155 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 1422155 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.606 11:18:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.166 11:18:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.166 00:11:50.166 real 0m50.620s 00:11:50.166 user 3m16.986s 00:11:50.166 sys 0m16.303s 00:11:50.166 11:18:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.166 11:18:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.166 ************************************ 00:11:50.166 END TEST nvmf_ns_hotplug_stress 00:11:50.166 ************************************ 00:11:50.166 11:18:46 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.166 11:18:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:50.166 11:18:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.166 11:18:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.166 ************************************ 00:11:50.166 START TEST nvmf_connect_stress 00:11:50.166 ************************************ 00:11:50.166 11:18:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.166 * Looking for test storage... 
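The namespace churn recorded by the nvmf_ns_hotplug_stress run above comes down to the three script lines the tracer keeps quoting (ns_hotplug_stress.sh@16-18): a loop bounded at 10 iterations that attaches the eight null bdevs (null0..null7) to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 in varying order and then detaches them again while the subsystem stays live. The sketch below is only illustrative: the rpc.py invocations and the i < 10 bound are taken from the log, but the while/for scaffolding, the shuf-based ordering, and the rpc/nqn variable names are assumptions, not the verbatim script.

    # Illustrative reconstruction of the add/remove churn seen above (assumptions noted in the lead-in).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        for n in $(seq 1 8 | shuf); do      # shuf is illustrative; the real script only needs the order to vary
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done

The exact ordering and any backgrounding of the individual RPCs are not fully visible in this excerpt, so treat the pass structure above as one plausible arrangement of the calls the log records.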
00:11:50.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.166 11:18:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:50.167 11:18:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.167 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.167 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.167 11:18:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.167 11:18:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:58.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:58.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.346 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:58.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.347 11:18:55 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:58.347 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:58.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:11:58.347 00:11:58.347 --- 10.0.0.2 ping statistics --- 00:11:58.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.347 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:58.347 00:11:58.347 --- 10.0.0.1 ping statistics --- 00:11:58.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.347 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1433991 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1433991 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 1433991 ']' 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:58.347 11:18:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.347 [2024-06-10 11:18:55.396176] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
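Pulled together for readability, the nvmf_tcp_init records above reduce to the following sequence: both ice ports sit in the same host, so the target-side port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 on the host, the NVMe/TCP port is opened in the firewall, connectivity is verified with a ping in each direction, and the nvmf target is launched inside that namespace. Every command below is copied from the log; only the grouping and the comments are added here.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Splitting target and initiator across a network namespace is what lets a single physical host exercise the real ice NICs over TCP instead of loopback.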
00:11:58.347 [2024-06-10 11:18:55.396223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.347 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.347 [2024-06-10 11:18:55.464671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.347 [2024-06-10 11:18:55.526149] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.347 [2024-06-10 11:18:55.526186] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.347 [2024-06-10 11:18:55.526193] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.347 [2024-06-10 11:18:55.526199] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.347 [2024-06-10 11:18:55.526204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.347 [2024-06-10 11:18:55.526302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.347 [2024-06-10 11:18:55.526451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.347 [2024-06-10 11:18:55.526452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 [2024-06-10 11:18:56.288030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 [2024-06-10 11:18:56.322963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.288 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 NULL1 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1434154 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.289 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.548 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.548 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:11:59.548 11:18:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.548 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.548 11:18:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.118 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.118 11:18:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:00.118 11:18:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.118 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.118 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.378 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.378 11:18:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:00.378 11:18:57 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.378 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.378 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.638 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.638 11:18:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:00.638 11:18:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.638 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.638 11:18:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.899 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:00.899 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.899 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.899 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.159 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.159 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:01.159 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.159 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.159 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.730 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.730 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:01.730 11:18:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.730 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.730 11:18:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.990 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.990 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:01.990 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.990 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.990 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.250 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:02.250 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:02.251 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.251 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:02.251 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:02.512 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:02.512 11:18:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:12:02.512 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:02.512 11:18:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.084 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.084 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:03.084 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.084 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.084 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.344 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.344 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:03.344 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.344 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.344 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.604 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:03.604 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.604 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.604 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.863 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.863 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:03.863 11:19:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.863 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.863 11:19:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.123 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:04.123 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.123 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.123 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.693 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.693 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:04.693 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.693 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.693 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.952 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.952 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:04.952 11:19:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.952 11:19:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.952 11:19:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.212 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:05.212 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:05.212 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.212 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:05.212 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.471 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:05.471 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:05.472 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.472 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:05.472 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.732 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:05.732 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:05.732 11:19:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.732 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:05.732 11:19:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.302 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.302 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:06.302 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.302 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.302 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.562 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.562 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:06.562 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.562 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.562 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.822 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.822 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:06.822 11:19:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.822 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.822 11:19:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.082 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.082 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:07.082 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.082 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:12:07.082 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.385 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.385 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:07.385 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.385 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.385 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.956 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.956 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:07.956 11:19:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.956 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.956 11:19:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.216 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.216 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:08.216 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.216 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.216 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.476 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.476 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:08.476 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.476 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.476 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.737 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.737 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:08.737 11:19:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.737 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.737 11:19:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.998 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.998 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:08.998 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.998 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.998 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.258 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1434154 00:12:09.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1434154) - No such process 
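The "No such process" result closes out the connect_stress pattern that the @27/@28 and @34/@35 trace lines above have been repeating: launch the connect_stress binary against the nqn.2016-06.io.spdk:cnode1 listener, keep the target busy with RPCs for as long as kill -0 still finds the PID, then tear down once the probe fails. A minimal sketch of that loop, assuming the rpc_cmd helper and the rpc.txt batch file named in the trace (the payload the twenty cat steps write into it is not visible in this log, and the loop structure is inferred from the repeating @34/@35 lines):

    # start the stress workload against the TCP listener (binary path shortened from the trace)
    ./connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    rpcs=rpc.txt                 # batch of RPCs; the trace shows 20 cat iterations filling it, contents elided
    # poll the stress process and keep issuing RPCs while it is still alive
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"        # redirection assumed; the xtrace lines only show the bare rpc_cmd call
    done
    wait "$PERF_PID"             # reap the PID once kill -0 reports it is gone
    rm -f "$rpcs"                # matches the @39 cleanup recorded in the next trace lines

The alternating kill -0 1434154 / rpc_cmd checks above are exactly this loop running until the stress process exits; the wait and rm -f steps follow immediately below.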
00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1434154 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.518 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.518 rmmod nvme_tcp 00:12:09.518 rmmod nvme_fabrics 00:12:09.518 rmmod nvme_keyring 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1433991 ']' 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1433991 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 1433991 ']' 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 1433991 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1433991 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1433991' 00:12:09.519 killing process with pid 1433991 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 1433991 00:12:09.519 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 1433991 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.779 11:19:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.780 11:19:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.780 11:19:06 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.693 11:19:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.693 00:12:11.693 real 0m21.910s 00:12:11.693 user 0m42.842s 00:12:11.693 sys 0m9.289s 00:12:11.693 11:19:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:11.693 11:19:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.693 ************************************ 00:12:11.693 END TEST nvmf_connect_stress 00:12:11.693 ************************************ 00:12:11.693 11:19:08 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:11.693 11:19:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:11.693 11:19:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:11.693 11:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.954 ************************************ 00:12:11.954 START TEST nvmf_fused_ordering 00:12:11.954 ************************************ 00:12:11.954 11:19:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:11.954 * Looking for test storage... 00:12:11.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.954 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.955 11:19:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.129 
11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:20.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:20.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.129 11:19:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:20.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:20.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.129 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.130 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:12:20.390 00:12:20.390 --- 10.0.0.2 ping statistics --- 00:12:20.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.390 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:12:20.390 00:12:20.390 --- 10.0.0.1 ping statistics --- 00:12:20.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.390 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1440294 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1440294 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:20.390 11:19:17 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 1440294 ']' 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:20.390 11:19:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:20.390 [2024-06-10 11:19:17.495676] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:12:20.390 [2024-06-10 11:19:17.495741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.390 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.390 [2024-06-10 11:19:17.572183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.650 [2024-06-10 11:19:17.642464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.650 [2024-06-10 11:19:17.642504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.650 [2024-06-10 11:19:17.642511] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.650 [2024-06-10 11:19:17.642516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.650 [2024-06-10 11:19:17.642521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
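Before this nvmf_tgt instance came up, the nvmf_tcp_init steps traced above split the two e810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened on the initiator interface, and both directions are ping-verified before the target is started inside the namespace. A condensed sketch of that sequence, using the interface names from the trace (the full nvmf_tgt path is abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target listen address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) traffic on the initiator interface
    ping -c 1 10.0.0.2                                                   # root namespace -> target, as in the trace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &       # the nvmfpid=1440294 process whose startup is logged here

Each command above appears in the nvmf/common.sh trace lines preceding this point; the fused_ordering RPC configuration of the target (transport, subsystem, listener, NULL1 namespace) follows in the lines below.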
00:12:20.650 [2024-06-10 11:19:17.642546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 [2024-06-10 11:19:18.375998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 [2024-06-10 11:19:18.400158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 NULL1 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.220 11:19:18 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.220 11:19:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:21.480 [2024-06-10 11:19:18.464737] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:12:21.480 [2024-06-10 11:19:18.464798] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440518 ] 00:12:21.480 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.740 Attached to nqn.2016-06.io.spdk:cnode1 00:12:21.740 Namespace ID: 1 size: 1GB 00:12:21.740 fused_ordering(0) 00:12:21.740 fused_ordering(1) 00:12:21.740 fused_ordering(2) 00:12:21.740 fused_ordering(3) 00:12:21.740 fused_ordering(4) 00:12:21.740 fused_ordering(5) 00:12:21.740 fused_ordering(6) 00:12:21.740 fused_ordering(7) 00:12:21.740 fused_ordering(8) 00:12:21.740 fused_ordering(9) 00:12:21.740 fused_ordering(10) 00:12:21.740 fused_ordering(11) 00:12:21.740 fused_ordering(12) 00:12:21.740 fused_ordering(13) 00:12:21.740 fused_ordering(14) 00:12:21.740 fused_ordering(15) 00:12:21.740 fused_ordering(16) 00:12:21.740 fused_ordering(17) 00:12:21.740 fused_ordering(18) 00:12:21.740 fused_ordering(19) 00:12:21.740 fused_ordering(20) 00:12:21.740 fused_ordering(21) 00:12:21.740 fused_ordering(22) 00:12:21.740 fused_ordering(23) 00:12:21.740 fused_ordering(24) 00:12:21.740 fused_ordering(25) 00:12:21.740 fused_ordering(26) 00:12:21.740 fused_ordering(27) 00:12:21.740 fused_ordering(28) 00:12:21.740 fused_ordering(29) 00:12:21.740 fused_ordering(30) 00:12:21.740 fused_ordering(31) 00:12:21.740 fused_ordering(32) 00:12:21.740 fused_ordering(33) 00:12:21.740 fused_ordering(34) 00:12:21.740 fused_ordering(35) 00:12:21.740 fused_ordering(36) 00:12:21.740 fused_ordering(37) 00:12:21.740 fused_ordering(38) 00:12:21.740 fused_ordering(39) 00:12:21.740 fused_ordering(40) 00:12:21.740 fused_ordering(41) 00:12:21.740 fused_ordering(42) 00:12:21.740 fused_ordering(43) 00:12:21.740 fused_ordering(44) 00:12:21.740 fused_ordering(45) 00:12:21.740 fused_ordering(46) 00:12:21.740 fused_ordering(47) 00:12:21.740 fused_ordering(48) 00:12:21.740 fused_ordering(49) 00:12:21.740 fused_ordering(50) 00:12:21.740 fused_ordering(51) 00:12:21.740 fused_ordering(52) 00:12:21.740 fused_ordering(53) 00:12:21.740 fused_ordering(54) 00:12:21.740 fused_ordering(55) 00:12:21.740 fused_ordering(56) 00:12:21.740 fused_ordering(57) 00:12:21.740 fused_ordering(58) 00:12:21.740 fused_ordering(59) 00:12:21.740 fused_ordering(60) 00:12:21.740 fused_ordering(61) 00:12:21.740 fused_ordering(62) 00:12:21.740 fused_ordering(63) 00:12:21.740 fused_ordering(64) 00:12:21.740 fused_ordering(65) 00:12:21.740 fused_ordering(66) 00:12:21.740 fused_ordering(67) 00:12:21.740 fused_ordering(68) 00:12:21.740 fused_ordering(69) 00:12:21.740 fused_ordering(70) 00:12:21.740 fused_ordering(71) 00:12:21.740 fused_ordering(72) 00:12:21.740 fused_ordering(73) 00:12:21.740 fused_ordering(74) 00:12:21.740 fused_ordering(75) 00:12:21.740 fused_ordering(76) 00:12:21.740 fused_ordering(77) 00:12:21.740 fused_ordering(78) 00:12:21.740 fused_ordering(79) 
00:12:21.740 fused_ordering(80) 00:12:21.740 fused_ordering(81) 00:12:21.740 fused_ordering(82) 00:12:21.740 fused_ordering(83) 00:12:21.740 fused_ordering(84) 00:12:21.740 fused_ordering(85) 00:12:21.740 fused_ordering(86) 00:12:21.740 fused_ordering(87) 00:12:21.740 fused_ordering(88) 00:12:21.740 fused_ordering(89) 00:12:21.740 fused_ordering(90) 00:12:21.740 fused_ordering(91) 00:12:21.740 fused_ordering(92) 00:12:21.740 fused_ordering(93) 00:12:21.740 fused_ordering(94) 00:12:21.740 fused_ordering(95) 00:12:21.740 fused_ordering(96) 00:12:21.740 fused_ordering(97) 00:12:21.740 fused_ordering(98) 00:12:21.740 fused_ordering(99) 00:12:21.740 fused_ordering(100) 00:12:21.740 fused_ordering(101) 00:12:21.740 fused_ordering(102) 00:12:21.740 fused_ordering(103) 00:12:21.740 fused_ordering(104) 00:12:21.740 fused_ordering(105) 00:12:21.740 fused_ordering(106) 00:12:21.740 fused_ordering(107) 00:12:21.740 fused_ordering(108) 00:12:21.740 fused_ordering(109) 00:12:21.740 fused_ordering(110) 00:12:21.740 fused_ordering(111) 00:12:21.740 fused_ordering(112) 00:12:21.740 fused_ordering(113) 00:12:21.740 fused_ordering(114) 00:12:21.740 fused_ordering(115) 00:12:21.740 fused_ordering(116) 00:12:21.740 fused_ordering(117) 00:12:21.740 fused_ordering(118) 00:12:21.740 fused_ordering(119) 00:12:21.740 fused_ordering(120) 00:12:21.740 fused_ordering(121) 00:12:21.740 fused_ordering(122) 00:12:21.740 fused_ordering(123) 00:12:21.740 fused_ordering(124) 00:12:21.740 fused_ordering(125) 00:12:21.740 fused_ordering(126) 00:12:21.740 fused_ordering(127) 00:12:21.740 fused_ordering(128) 00:12:21.740 fused_ordering(129) 00:12:21.740 fused_ordering(130) 00:12:21.740 fused_ordering(131) 00:12:21.740 fused_ordering(132) 00:12:21.740 fused_ordering(133) 00:12:21.740 fused_ordering(134) 00:12:21.740 fused_ordering(135) 00:12:21.740 fused_ordering(136) 00:12:21.740 fused_ordering(137) 00:12:21.740 fused_ordering(138) 00:12:21.740 fused_ordering(139) 00:12:21.740 fused_ordering(140) 00:12:21.740 fused_ordering(141) 00:12:21.740 fused_ordering(142) 00:12:21.740 fused_ordering(143) 00:12:21.740 fused_ordering(144) 00:12:21.740 fused_ordering(145) 00:12:21.740 fused_ordering(146) 00:12:21.740 fused_ordering(147) 00:12:21.740 fused_ordering(148) 00:12:21.740 fused_ordering(149) 00:12:21.740 fused_ordering(150) 00:12:21.740 fused_ordering(151) 00:12:21.740 fused_ordering(152) 00:12:21.740 fused_ordering(153) 00:12:21.740 fused_ordering(154) 00:12:21.740 fused_ordering(155) 00:12:21.740 fused_ordering(156) 00:12:21.740 fused_ordering(157) 00:12:21.740 fused_ordering(158) 00:12:21.740 fused_ordering(159) 00:12:21.740 fused_ordering(160) 00:12:21.740 fused_ordering(161) 00:12:21.740 fused_ordering(162) 00:12:21.740 fused_ordering(163) 00:12:21.740 fused_ordering(164) 00:12:21.740 fused_ordering(165) 00:12:21.740 fused_ordering(166) 00:12:21.740 fused_ordering(167) 00:12:21.740 fused_ordering(168) 00:12:21.740 fused_ordering(169) 00:12:21.740 fused_ordering(170) 00:12:21.740 fused_ordering(171) 00:12:21.740 fused_ordering(172) 00:12:21.740 fused_ordering(173) 00:12:21.740 fused_ordering(174) 00:12:21.740 fused_ordering(175) 00:12:21.740 fused_ordering(176) 00:12:21.740 fused_ordering(177) 00:12:21.740 fused_ordering(178) 00:12:21.740 fused_ordering(179) 00:12:21.740 fused_ordering(180) 00:12:21.740 fused_ordering(181) 00:12:21.740 fused_ordering(182) 00:12:21.740 fused_ordering(183) 00:12:21.740 fused_ordering(184) 00:12:21.740 fused_ordering(185) 00:12:21.740 fused_ordering(186) 00:12:21.740 fused_ordering(187) 
00:12:21.740 fused_ordering(188) 00:12:21.740 fused_ordering(189) 00:12:21.740 fused_ordering(190) 00:12:21.741 fused_ordering(191) 00:12:21.741 fused_ordering(192) 00:12:21.741 fused_ordering(193) 00:12:21.741 fused_ordering(194) 00:12:21.741 fused_ordering(195) 00:12:21.741 fused_ordering(196) 00:12:21.741 fused_ordering(197) 00:12:21.741 fused_ordering(198) 00:12:21.741 fused_ordering(199) 00:12:21.741 fused_ordering(200) 00:12:21.741 fused_ordering(201) 00:12:21.741 fused_ordering(202) 00:12:21.741 fused_ordering(203) 00:12:21.741 fused_ordering(204) 00:12:21.741 fused_ordering(205) 00:12:22.312 fused_ordering(206) 00:12:22.312 fused_ordering(207) 00:12:22.312 fused_ordering(208) 00:12:22.312 fused_ordering(209) 00:12:22.312 fused_ordering(210) 00:12:22.312 fused_ordering(211) 00:12:22.312 fused_ordering(212) 00:12:22.312 fused_ordering(213) 00:12:22.312 fused_ordering(214) 00:12:22.312 fused_ordering(215) 00:12:22.312 fused_ordering(216) 00:12:22.312 fused_ordering(217) 00:12:22.312 fused_ordering(218) 00:12:22.312 fused_ordering(219) 00:12:22.312 fused_ordering(220) 00:12:22.312 fused_ordering(221) 00:12:22.312 fused_ordering(222) 00:12:22.312 fused_ordering(223) 00:12:22.312 fused_ordering(224) 00:12:22.312 fused_ordering(225) 00:12:22.312 fused_ordering(226) 00:12:22.312 fused_ordering(227) 00:12:22.312 fused_ordering(228) 00:12:22.312 fused_ordering(229) 00:12:22.312 fused_ordering(230) 00:12:22.312 fused_ordering(231) 00:12:22.312 fused_ordering(232) 00:12:22.312 fused_ordering(233) 00:12:22.312 fused_ordering(234) 00:12:22.312 fused_ordering(235) 00:12:22.312 fused_ordering(236) 00:12:22.312 fused_ordering(237) 00:12:22.312 fused_ordering(238) 00:12:22.312 fused_ordering(239) 00:12:22.312 fused_ordering(240) 00:12:22.312 fused_ordering(241) 00:12:22.312 fused_ordering(242) 00:12:22.312 fused_ordering(243) 00:12:22.312 fused_ordering(244) 00:12:22.312 fused_ordering(245) 00:12:22.312 fused_ordering(246) 00:12:22.312 fused_ordering(247) 00:12:22.312 fused_ordering(248) 00:12:22.312 fused_ordering(249) 00:12:22.312 fused_ordering(250) 00:12:22.312 fused_ordering(251) 00:12:22.312 fused_ordering(252) 00:12:22.312 fused_ordering(253) 00:12:22.312 fused_ordering(254) 00:12:22.312 fused_ordering(255) 00:12:22.312 fused_ordering(256) 00:12:22.312 fused_ordering(257) 00:12:22.312 fused_ordering(258) 00:12:22.312 fused_ordering(259) 00:12:22.312 fused_ordering(260) 00:12:22.312 fused_ordering(261) 00:12:22.312 fused_ordering(262) 00:12:22.312 fused_ordering(263) 00:12:22.312 fused_ordering(264) 00:12:22.312 fused_ordering(265) 00:12:22.312 fused_ordering(266) 00:12:22.312 fused_ordering(267) 00:12:22.312 fused_ordering(268) 00:12:22.312 fused_ordering(269) 00:12:22.312 fused_ordering(270) 00:12:22.312 fused_ordering(271) 00:12:22.312 fused_ordering(272) 00:12:22.312 fused_ordering(273) 00:12:22.312 fused_ordering(274) 00:12:22.313 fused_ordering(275) 00:12:22.313 fused_ordering(276) 00:12:22.313 fused_ordering(277) 00:12:22.313 fused_ordering(278) 00:12:22.313 fused_ordering(279) 00:12:22.313 fused_ordering(280) 00:12:22.313 fused_ordering(281) 00:12:22.313 fused_ordering(282) 00:12:22.313 fused_ordering(283) 00:12:22.313 fused_ordering(284) 00:12:22.313 fused_ordering(285) 00:12:22.313 fused_ordering(286) 00:12:22.313 fused_ordering(287) 00:12:22.313 fused_ordering(288) 00:12:22.313 fused_ordering(289) 00:12:22.313 fused_ordering(290) 00:12:22.313 fused_ordering(291) 00:12:22.313 fused_ordering(292) 00:12:22.313 fused_ordering(293) 00:12:22.313 fused_ordering(294) 00:12:22.313 
fused_ordering(295) 00:12:22.313 fused_ordering(296) 00:12:22.313 fused_ordering(297) 00:12:22.313 fused_ordering(298) 00:12:22.313 fused_ordering(299) 00:12:22.313 fused_ordering(300) 00:12:22.313 fused_ordering(301) 00:12:22.313 fused_ordering(302) 00:12:22.313 fused_ordering(303) 00:12:22.313 fused_ordering(304) 00:12:22.313 fused_ordering(305) 00:12:22.313 fused_ordering(306) 00:12:22.313 fused_ordering(307) 00:12:22.313 fused_ordering(308) 00:12:22.313 fused_ordering(309) 00:12:22.313 fused_ordering(310) 00:12:22.313 fused_ordering(311) 00:12:22.313 fused_ordering(312) 00:12:22.313 fused_ordering(313) 00:12:22.313 fused_ordering(314) 00:12:22.313 fused_ordering(315) 00:12:22.313 fused_ordering(316) 00:12:22.313 fused_ordering(317) 00:12:22.313 fused_ordering(318) 00:12:22.313 fused_ordering(319) 00:12:22.313 fused_ordering(320) 00:12:22.313 fused_ordering(321) 00:12:22.313 fused_ordering(322) 00:12:22.313 fused_ordering(323) 00:12:22.313 fused_ordering(324) 00:12:22.313 fused_ordering(325) 00:12:22.313 fused_ordering(326) 00:12:22.313 fused_ordering(327) 00:12:22.313 fused_ordering(328) 00:12:22.313 fused_ordering(329) 00:12:22.313 fused_ordering(330) 00:12:22.313 fused_ordering(331) 00:12:22.313 fused_ordering(332) 00:12:22.313 fused_ordering(333) 00:12:22.313 fused_ordering(334) 00:12:22.313 fused_ordering(335) 00:12:22.313 fused_ordering(336) 00:12:22.313 fused_ordering(337) 00:12:22.313 fused_ordering(338) 00:12:22.313 fused_ordering(339) 00:12:22.313 fused_ordering(340) 00:12:22.313 fused_ordering(341) 00:12:22.313 fused_ordering(342) 00:12:22.313 fused_ordering(343) 00:12:22.313 fused_ordering(344) 00:12:22.313 fused_ordering(345) 00:12:22.313 fused_ordering(346) 00:12:22.313 fused_ordering(347) 00:12:22.313 fused_ordering(348) 00:12:22.313 fused_ordering(349) 00:12:22.313 fused_ordering(350) 00:12:22.313 fused_ordering(351) 00:12:22.313 fused_ordering(352) 00:12:22.313 fused_ordering(353) 00:12:22.313 fused_ordering(354) 00:12:22.313 fused_ordering(355) 00:12:22.313 fused_ordering(356) 00:12:22.313 fused_ordering(357) 00:12:22.313 fused_ordering(358) 00:12:22.313 fused_ordering(359) 00:12:22.313 fused_ordering(360) 00:12:22.313 fused_ordering(361) 00:12:22.313 fused_ordering(362) 00:12:22.313 fused_ordering(363) 00:12:22.313 fused_ordering(364) 00:12:22.313 fused_ordering(365) 00:12:22.313 fused_ordering(366) 00:12:22.313 fused_ordering(367) 00:12:22.313 fused_ordering(368) 00:12:22.313 fused_ordering(369) 00:12:22.313 fused_ordering(370) 00:12:22.313 fused_ordering(371) 00:12:22.313 fused_ordering(372) 00:12:22.313 fused_ordering(373) 00:12:22.313 fused_ordering(374) 00:12:22.313 fused_ordering(375) 00:12:22.313 fused_ordering(376) 00:12:22.313 fused_ordering(377) 00:12:22.313 fused_ordering(378) 00:12:22.313 fused_ordering(379) 00:12:22.313 fused_ordering(380) 00:12:22.313 fused_ordering(381) 00:12:22.313 fused_ordering(382) 00:12:22.313 fused_ordering(383) 00:12:22.313 fused_ordering(384) 00:12:22.313 fused_ordering(385) 00:12:22.313 fused_ordering(386) 00:12:22.313 fused_ordering(387) 00:12:22.313 fused_ordering(388) 00:12:22.313 fused_ordering(389) 00:12:22.313 fused_ordering(390) 00:12:22.313 fused_ordering(391) 00:12:22.313 fused_ordering(392) 00:12:22.313 fused_ordering(393) 00:12:22.313 fused_ordering(394) 00:12:22.313 fused_ordering(395) 00:12:22.313 fused_ordering(396) 00:12:22.313 fused_ordering(397) 00:12:22.313 fused_ordering(398) 00:12:22.313 fused_ordering(399) 00:12:22.313 fused_ordering(400) 00:12:22.313 fused_ordering(401) 00:12:22.313 fused_ordering(402) 
00:12:22.313 fused_ordering(403) 00:12:22.313 fused_ordering(404) 00:12:22.313 fused_ordering(405) 00:12:22.313 fused_ordering(406) 00:12:22.313 fused_ordering(407) 00:12:22.313 fused_ordering(408) 00:12:22.313 fused_ordering(409) 00:12:22.313 fused_ordering(410) 00:12:22.575 fused_ordering(411) 00:12:22.575 fused_ordering(412) 00:12:22.575 fused_ordering(413) 00:12:22.575 fused_ordering(414) 00:12:22.575 fused_ordering(415) 00:12:22.575 fused_ordering(416) 00:12:22.575 fused_ordering(417) 00:12:22.575 fused_ordering(418) 00:12:22.575 fused_ordering(419) 00:12:22.575 fused_ordering(420) 00:12:22.575 fused_ordering(421) 00:12:22.575 fused_ordering(422) 00:12:22.575 fused_ordering(423) 00:12:22.575 fused_ordering(424) 00:12:22.575 fused_ordering(425) 00:12:22.575 fused_ordering(426) 00:12:22.575 fused_ordering(427) 00:12:22.575 fused_ordering(428) 00:12:22.575 fused_ordering(429) 00:12:22.575 fused_ordering(430) 00:12:22.575 fused_ordering(431) 00:12:22.575 fused_ordering(432) 00:12:22.575 fused_ordering(433) 00:12:22.575 fused_ordering(434) 00:12:22.575 fused_ordering(435) 00:12:22.575 fused_ordering(436) 00:12:22.575 fused_ordering(437) 00:12:22.575 fused_ordering(438) 00:12:22.575 fused_ordering(439) 00:12:22.575 fused_ordering(440) 00:12:22.575 fused_ordering(441) 00:12:22.575 fused_ordering(442) 00:12:22.575 fused_ordering(443) 00:12:22.575 fused_ordering(444) 00:12:22.575 fused_ordering(445) 00:12:22.575 fused_ordering(446) 00:12:22.575 fused_ordering(447) 00:12:22.575 fused_ordering(448) 00:12:22.575 fused_ordering(449) 00:12:22.575 fused_ordering(450) 00:12:22.575 fused_ordering(451) 00:12:22.575 fused_ordering(452) 00:12:22.575 fused_ordering(453) 00:12:22.575 fused_ordering(454) 00:12:22.575 fused_ordering(455) 00:12:22.575 fused_ordering(456) 00:12:22.575 fused_ordering(457) 00:12:22.575 fused_ordering(458) 00:12:22.575 fused_ordering(459) 00:12:22.575 fused_ordering(460) 00:12:22.575 fused_ordering(461) 00:12:22.575 fused_ordering(462) 00:12:22.575 fused_ordering(463) 00:12:22.575 fused_ordering(464) 00:12:22.575 fused_ordering(465) 00:12:22.575 fused_ordering(466) 00:12:22.575 fused_ordering(467) 00:12:22.575 fused_ordering(468) 00:12:22.575 fused_ordering(469) 00:12:22.575 fused_ordering(470) 00:12:22.575 fused_ordering(471) 00:12:22.575 fused_ordering(472) 00:12:22.575 fused_ordering(473) 00:12:22.575 fused_ordering(474) 00:12:22.575 fused_ordering(475) 00:12:22.575 fused_ordering(476) 00:12:22.575 fused_ordering(477) 00:12:22.575 fused_ordering(478) 00:12:22.575 fused_ordering(479) 00:12:22.575 fused_ordering(480) 00:12:22.575 fused_ordering(481) 00:12:22.575 fused_ordering(482) 00:12:22.575 fused_ordering(483) 00:12:22.575 fused_ordering(484) 00:12:22.575 fused_ordering(485) 00:12:22.575 fused_ordering(486) 00:12:22.575 fused_ordering(487) 00:12:22.575 fused_ordering(488) 00:12:22.575 fused_ordering(489) 00:12:22.575 fused_ordering(490) 00:12:22.575 fused_ordering(491) 00:12:22.575 fused_ordering(492) 00:12:22.575 fused_ordering(493) 00:12:22.575 fused_ordering(494) 00:12:22.575 fused_ordering(495) 00:12:22.575 fused_ordering(496) 00:12:22.575 fused_ordering(497) 00:12:22.575 fused_ordering(498) 00:12:22.575 fused_ordering(499) 00:12:22.575 fused_ordering(500) 00:12:22.575 fused_ordering(501) 00:12:22.575 fused_ordering(502) 00:12:22.575 fused_ordering(503) 00:12:22.575 fused_ordering(504) 00:12:22.575 fused_ordering(505) 00:12:22.575 fused_ordering(506) 00:12:22.575 fused_ordering(507) 00:12:22.575 fused_ordering(508) 00:12:22.575 fused_ordering(509) 00:12:22.575 
fused_ordering(510) 00:12:22.575 fused_ordering(511) 00:12:22.575 fused_ordering(512) 00:12:22.575 fused_ordering(513) 00:12:22.575 fused_ordering(514) 00:12:22.575 fused_ordering(515) 00:12:22.575 fused_ordering(516) 00:12:22.575 fused_ordering(517) 00:12:22.575 fused_ordering(518) 00:12:22.575 fused_ordering(519) 00:12:22.575 fused_ordering(520) 00:12:22.575 fused_ordering(521) 00:12:22.575 fused_ordering(522) 00:12:22.575 fused_ordering(523) 00:12:22.575 fused_ordering(524) 00:12:22.575 fused_ordering(525) 00:12:22.575 fused_ordering(526) 00:12:22.575 fused_ordering(527) 00:12:22.575 fused_ordering(528) 00:12:22.575 fused_ordering(529) 00:12:22.575 fused_ordering(530) 00:12:22.575 fused_ordering(531) 00:12:22.575 fused_ordering(532) 00:12:22.575 fused_ordering(533) 00:12:22.575 fused_ordering(534) 00:12:22.575 fused_ordering(535) 00:12:22.575 fused_ordering(536) 00:12:22.575 fused_ordering(537) 00:12:22.575 fused_ordering(538) 00:12:22.575 fused_ordering(539) 00:12:22.575 fused_ordering(540) 00:12:22.575 fused_ordering(541) 00:12:22.575 fused_ordering(542) 00:12:22.575 fused_ordering(543) 00:12:22.575 fused_ordering(544) 00:12:22.575 fused_ordering(545) 00:12:22.575 fused_ordering(546) 00:12:22.575 fused_ordering(547) 00:12:22.575 fused_ordering(548) 00:12:22.575 fused_ordering(549) 00:12:22.575 fused_ordering(550) 00:12:22.575 fused_ordering(551) 00:12:22.575 fused_ordering(552) 00:12:22.575 fused_ordering(553) 00:12:22.575 fused_ordering(554) 00:12:22.575 fused_ordering(555) 00:12:22.575 fused_ordering(556) 00:12:22.575 fused_ordering(557) 00:12:22.575 fused_ordering(558) 00:12:22.575 fused_ordering(559) 00:12:22.575 fused_ordering(560) 00:12:22.575 fused_ordering(561) 00:12:22.575 fused_ordering(562) 00:12:22.575 fused_ordering(563) 00:12:22.575 fused_ordering(564) 00:12:22.575 fused_ordering(565) 00:12:22.575 fused_ordering(566) 00:12:22.575 fused_ordering(567) 00:12:22.575 fused_ordering(568) 00:12:22.575 fused_ordering(569) 00:12:22.575 fused_ordering(570) 00:12:22.575 fused_ordering(571) 00:12:22.575 fused_ordering(572) 00:12:22.575 fused_ordering(573) 00:12:22.575 fused_ordering(574) 00:12:22.575 fused_ordering(575) 00:12:22.575 fused_ordering(576) 00:12:22.575 fused_ordering(577) 00:12:22.575 fused_ordering(578) 00:12:22.575 fused_ordering(579) 00:12:22.575 fused_ordering(580) 00:12:22.575 fused_ordering(581) 00:12:22.575 fused_ordering(582) 00:12:22.575 fused_ordering(583) 00:12:22.575 fused_ordering(584) 00:12:22.575 fused_ordering(585) 00:12:22.575 fused_ordering(586) 00:12:22.575 fused_ordering(587) 00:12:22.575 fused_ordering(588) 00:12:22.575 fused_ordering(589) 00:12:22.575 fused_ordering(590) 00:12:22.575 fused_ordering(591) 00:12:22.575 fused_ordering(592) 00:12:22.575 fused_ordering(593) 00:12:22.575 fused_ordering(594) 00:12:22.575 fused_ordering(595) 00:12:22.575 fused_ordering(596) 00:12:22.575 fused_ordering(597) 00:12:22.575 fused_ordering(598) 00:12:22.575 fused_ordering(599) 00:12:22.575 fused_ordering(600) 00:12:22.575 fused_ordering(601) 00:12:22.576 fused_ordering(602) 00:12:22.576 fused_ordering(603) 00:12:22.576 fused_ordering(604) 00:12:22.576 fused_ordering(605) 00:12:22.576 fused_ordering(606) 00:12:22.576 fused_ordering(607) 00:12:22.576 fused_ordering(608) 00:12:22.576 fused_ordering(609) 00:12:22.576 fused_ordering(610) 00:12:22.576 fused_ordering(611) 00:12:22.576 fused_ordering(612) 00:12:22.576 fused_ordering(613) 00:12:22.576 fused_ordering(614) 00:12:22.576 fused_ordering(615) 00:12:23.149 fused_ordering(616) 00:12:23.149 fused_ordering(617) 
00:12:23.149 fused_ordering(618) 00:12:23.149 fused_ordering(619) 00:12:23.149 fused_ordering(620) 00:12:23.149 fused_ordering(621) 00:12:23.149 fused_ordering(622) 00:12:23.149 fused_ordering(623) 00:12:23.149 fused_ordering(624) 00:12:23.149 fused_ordering(625) 00:12:23.149 fused_ordering(626) 00:12:23.149 fused_ordering(627) 00:12:23.149 fused_ordering(628) 00:12:23.149 fused_ordering(629) 00:12:23.149 fused_ordering(630) 00:12:23.149 fused_ordering(631) 00:12:23.149 fused_ordering(632) 00:12:23.149 fused_ordering(633) 00:12:23.149 fused_ordering(634) 00:12:23.149 fused_ordering(635) 00:12:23.149 fused_ordering(636) 00:12:23.149 fused_ordering(637) 00:12:23.149 fused_ordering(638) 00:12:23.149 fused_ordering(639) 00:12:23.149 fused_ordering(640) 00:12:23.149 fused_ordering(641) 00:12:23.149 fused_ordering(642) 00:12:23.149 fused_ordering(643) 00:12:23.149 fused_ordering(644) 00:12:23.149 fused_ordering(645) 00:12:23.149 fused_ordering(646) 00:12:23.149 fused_ordering(647) 00:12:23.149 fused_ordering(648) 00:12:23.149 fused_ordering(649) 00:12:23.149 fused_ordering(650) 00:12:23.149 fused_ordering(651) 00:12:23.149 fused_ordering(652) 00:12:23.149 fused_ordering(653) 00:12:23.149 fused_ordering(654) 00:12:23.149 fused_ordering(655) 00:12:23.149 fused_ordering(656) 00:12:23.149 fused_ordering(657) 00:12:23.149 fused_ordering(658) 00:12:23.149 fused_ordering(659) 00:12:23.149 fused_ordering(660) 00:12:23.149 fused_ordering(661) 00:12:23.149 fused_ordering(662) 00:12:23.149 fused_ordering(663) 00:12:23.149 fused_ordering(664) 00:12:23.149 fused_ordering(665) 00:12:23.149 fused_ordering(666) 00:12:23.149 fused_ordering(667) 00:12:23.149 fused_ordering(668) 00:12:23.149 fused_ordering(669) 00:12:23.149 fused_ordering(670) 00:12:23.149 fused_ordering(671) 00:12:23.149 fused_ordering(672) 00:12:23.149 fused_ordering(673) 00:12:23.149 fused_ordering(674) 00:12:23.149 fused_ordering(675) 00:12:23.149 fused_ordering(676) 00:12:23.149 fused_ordering(677) 00:12:23.149 fused_ordering(678) 00:12:23.149 fused_ordering(679) 00:12:23.149 fused_ordering(680) 00:12:23.149 fused_ordering(681) 00:12:23.149 fused_ordering(682) 00:12:23.149 fused_ordering(683) 00:12:23.149 fused_ordering(684) 00:12:23.149 fused_ordering(685) 00:12:23.149 fused_ordering(686) 00:12:23.149 fused_ordering(687) 00:12:23.149 fused_ordering(688) 00:12:23.149 fused_ordering(689) 00:12:23.149 fused_ordering(690) 00:12:23.149 fused_ordering(691) 00:12:23.149 fused_ordering(692) 00:12:23.149 fused_ordering(693) 00:12:23.149 fused_ordering(694) 00:12:23.149 fused_ordering(695) 00:12:23.149 fused_ordering(696) 00:12:23.149 fused_ordering(697) 00:12:23.149 fused_ordering(698) 00:12:23.149 fused_ordering(699) 00:12:23.149 fused_ordering(700) 00:12:23.149 fused_ordering(701) 00:12:23.149 fused_ordering(702) 00:12:23.149 fused_ordering(703) 00:12:23.149 fused_ordering(704) 00:12:23.149 fused_ordering(705) 00:12:23.149 fused_ordering(706) 00:12:23.149 fused_ordering(707) 00:12:23.149 fused_ordering(708) 00:12:23.149 fused_ordering(709) 00:12:23.149 fused_ordering(710) 00:12:23.149 fused_ordering(711) 00:12:23.149 fused_ordering(712) 00:12:23.149 fused_ordering(713) 00:12:23.149 fused_ordering(714) 00:12:23.149 fused_ordering(715) 00:12:23.149 fused_ordering(716) 00:12:23.149 fused_ordering(717) 00:12:23.149 fused_ordering(718) 00:12:23.149 fused_ordering(719) 00:12:23.149 fused_ordering(720) 00:12:23.149 fused_ordering(721) 00:12:23.149 fused_ordering(722) 00:12:23.149 fused_ordering(723) 00:12:23.149 fused_ordering(724) 00:12:23.149 
fused_ordering(725) 00:12:23.149 fused_ordering(726) 00:12:23.149 fused_ordering(727) 00:12:23.149 fused_ordering(728) 00:12:23.149 fused_ordering(729) 00:12:23.149 fused_ordering(730) 00:12:23.149 fused_ordering(731) 00:12:23.149 fused_ordering(732) 00:12:23.149 fused_ordering(733) 00:12:23.149 fused_ordering(734) 00:12:23.149 fused_ordering(735) 00:12:23.149 fused_ordering(736) 00:12:23.149 fused_ordering(737) 00:12:23.149 fused_ordering(738) 00:12:23.149 fused_ordering(739) 00:12:23.149 fused_ordering(740) 00:12:23.149 fused_ordering(741) 00:12:23.149 fused_ordering(742) 00:12:23.149 fused_ordering(743) 00:12:23.149 fused_ordering(744) 00:12:23.149 fused_ordering(745) 00:12:23.149 fused_ordering(746) 00:12:23.149 fused_ordering(747) 00:12:23.149 fused_ordering(748) 00:12:23.149 fused_ordering(749) 00:12:23.149 fused_ordering(750) 00:12:23.149 fused_ordering(751) 00:12:23.149 fused_ordering(752) 00:12:23.149 fused_ordering(753) 00:12:23.149 fused_ordering(754) 00:12:23.149 fused_ordering(755) 00:12:23.149 fused_ordering(756) 00:12:23.149 fused_ordering(757) 00:12:23.149 fused_ordering(758) 00:12:23.149 fused_ordering(759) 00:12:23.149 fused_ordering(760) 00:12:23.149 fused_ordering(761) 00:12:23.149 fused_ordering(762) 00:12:23.149 fused_ordering(763) 00:12:23.149 fused_ordering(764) 00:12:23.149 fused_ordering(765) 00:12:23.149 fused_ordering(766) 00:12:23.149 fused_ordering(767) 00:12:23.149 fused_ordering(768) 00:12:23.149 fused_ordering(769) 00:12:23.149 fused_ordering(770) 00:12:23.149 fused_ordering(771) 00:12:23.149 fused_ordering(772) 00:12:23.149 fused_ordering(773) 00:12:23.149 fused_ordering(774) 00:12:23.149 fused_ordering(775) 00:12:23.149 fused_ordering(776) 00:12:23.149 fused_ordering(777) 00:12:23.149 fused_ordering(778) 00:12:23.149 fused_ordering(779) 00:12:23.149 fused_ordering(780) 00:12:23.149 fused_ordering(781) 00:12:23.149 fused_ordering(782) 00:12:23.149 fused_ordering(783) 00:12:23.149 fused_ordering(784) 00:12:23.149 fused_ordering(785) 00:12:23.149 fused_ordering(786) 00:12:23.149 fused_ordering(787) 00:12:23.149 fused_ordering(788) 00:12:23.149 fused_ordering(789) 00:12:23.149 fused_ordering(790) 00:12:23.149 fused_ordering(791) 00:12:23.149 fused_ordering(792) 00:12:23.149 fused_ordering(793) 00:12:23.149 fused_ordering(794) 00:12:23.149 fused_ordering(795) 00:12:23.149 fused_ordering(796) 00:12:23.149 fused_ordering(797) 00:12:23.149 fused_ordering(798) 00:12:23.149 fused_ordering(799) 00:12:23.149 fused_ordering(800) 00:12:23.149 fused_ordering(801) 00:12:23.149 fused_ordering(802) 00:12:23.149 fused_ordering(803) 00:12:23.149 fused_ordering(804) 00:12:23.149 fused_ordering(805) 00:12:23.149 fused_ordering(806) 00:12:23.149 fused_ordering(807) 00:12:23.149 fused_ordering(808) 00:12:23.149 fused_ordering(809) 00:12:23.149 fused_ordering(810) 00:12:23.149 fused_ordering(811) 00:12:23.149 fused_ordering(812) 00:12:23.149 fused_ordering(813) 00:12:23.149 fused_ordering(814) 00:12:23.149 fused_ordering(815) 00:12:23.149 fused_ordering(816) 00:12:23.149 fused_ordering(817) 00:12:23.149 fused_ordering(818) 00:12:23.149 fused_ordering(819) 00:12:23.149 fused_ordering(820) 00:12:23.721 fused_ordering(821) 00:12:23.721 fused_ordering(822) 00:12:23.721 fused_ordering(823) 00:12:23.721 fused_ordering(824) 00:12:23.721 fused_ordering(825) 00:12:23.721 fused_ordering(826) 00:12:23.721 fused_ordering(827) 00:12:23.721 fused_ordering(828) 00:12:23.721 fused_ordering(829) 00:12:23.721 fused_ordering(830) 00:12:23.721 fused_ordering(831) 00:12:23.721 fused_ordering(832) 
00:12:23.721 fused_ordering(833) 00:12:23.721 fused_ordering(834) 00:12:23.721 fused_ordering(835) 00:12:23.721 fused_ordering(836) 00:12:23.721 fused_ordering(837) 00:12:23.721 fused_ordering(838) 00:12:23.721 fused_ordering(839) 00:12:23.721 fused_ordering(840) 00:12:23.721 fused_ordering(841) 00:12:23.721 fused_ordering(842) 00:12:23.721 fused_ordering(843) 00:12:23.721 fused_ordering(844) 00:12:23.721 fused_ordering(845) 00:12:23.721 fused_ordering(846) 00:12:23.721 fused_ordering(847) 00:12:23.721 fused_ordering(848) 00:12:23.721 fused_ordering(849) 00:12:23.721 fused_ordering(850) 00:12:23.721 fused_ordering(851) 00:12:23.721 fused_ordering(852) 00:12:23.721 fused_ordering(853) 00:12:23.721 fused_ordering(854) 00:12:23.721 fused_ordering(855) 00:12:23.721 fused_ordering(856) 00:12:23.721 fused_ordering(857) 00:12:23.721 fused_ordering(858) 00:12:23.721 fused_ordering(859) 00:12:23.721 fused_ordering(860) 00:12:23.721 fused_ordering(861) 00:12:23.721 fused_ordering(862) 00:12:23.721 fused_ordering(863) 00:12:23.721 fused_ordering(864) 00:12:23.721 fused_ordering(865) 00:12:23.721 fused_ordering(866) 00:12:23.721 fused_ordering(867) 00:12:23.721 fused_ordering(868) 00:12:23.721 fused_ordering(869) 00:12:23.721 fused_ordering(870) 00:12:23.721 fused_ordering(871) 00:12:23.721 fused_ordering(872) 00:12:23.721 fused_ordering(873) 00:12:23.721 fused_ordering(874) 00:12:23.721 fused_ordering(875) 00:12:23.721 fused_ordering(876) 00:12:23.721 fused_ordering(877) 00:12:23.721 fused_ordering(878) 00:12:23.721 fused_ordering(879) 00:12:23.721 fused_ordering(880) 00:12:23.721 fused_ordering(881) 00:12:23.721 fused_ordering(882) 00:12:23.721 fused_ordering(883) 00:12:23.721 fused_ordering(884) 00:12:23.721 fused_ordering(885) 00:12:23.721 fused_ordering(886) 00:12:23.721 fused_ordering(887) 00:12:23.721 fused_ordering(888) 00:12:23.721 fused_ordering(889) 00:12:23.721 fused_ordering(890) 00:12:23.721 fused_ordering(891) 00:12:23.721 fused_ordering(892) 00:12:23.721 fused_ordering(893) 00:12:23.721 fused_ordering(894) 00:12:23.721 fused_ordering(895) 00:12:23.721 fused_ordering(896) 00:12:23.721 fused_ordering(897) 00:12:23.721 fused_ordering(898) 00:12:23.721 fused_ordering(899) 00:12:23.721 fused_ordering(900) 00:12:23.721 fused_ordering(901) 00:12:23.721 fused_ordering(902) 00:12:23.721 fused_ordering(903) 00:12:23.721 fused_ordering(904) 00:12:23.721 fused_ordering(905) 00:12:23.721 fused_ordering(906) 00:12:23.721 fused_ordering(907) 00:12:23.721 fused_ordering(908) 00:12:23.721 fused_ordering(909) 00:12:23.721 fused_ordering(910) 00:12:23.721 fused_ordering(911) 00:12:23.721 fused_ordering(912) 00:12:23.721 fused_ordering(913) 00:12:23.721 fused_ordering(914) 00:12:23.721 fused_ordering(915) 00:12:23.721 fused_ordering(916) 00:12:23.721 fused_ordering(917) 00:12:23.721 fused_ordering(918) 00:12:23.721 fused_ordering(919) 00:12:23.721 fused_ordering(920) 00:12:23.721 fused_ordering(921) 00:12:23.721 fused_ordering(922) 00:12:23.721 fused_ordering(923) 00:12:23.721 fused_ordering(924) 00:12:23.721 fused_ordering(925) 00:12:23.721 fused_ordering(926) 00:12:23.721 fused_ordering(927) 00:12:23.721 fused_ordering(928) 00:12:23.721 fused_ordering(929) 00:12:23.721 fused_ordering(930) 00:12:23.721 fused_ordering(931) 00:12:23.721 fused_ordering(932) 00:12:23.721 fused_ordering(933) 00:12:23.721 fused_ordering(934) 00:12:23.721 fused_ordering(935) 00:12:23.721 fused_ordering(936) 00:12:23.721 fused_ordering(937) 00:12:23.721 fused_ordering(938) 00:12:23.721 fused_ordering(939) 00:12:23.721 
fused_ordering(940) 00:12:23.721 fused_ordering(941) 00:12:23.721 fused_ordering(942) 00:12:23.721 fused_ordering(943) 00:12:23.721 fused_ordering(944) 00:12:23.721 fused_ordering(945) 00:12:23.721 fused_ordering(946) 00:12:23.721 fused_ordering(947) 00:12:23.721 fused_ordering(948) 00:12:23.721 fused_ordering(949) 00:12:23.721 fused_ordering(950) 00:12:23.721 fused_ordering(951) 00:12:23.721 fused_ordering(952) 00:12:23.721 fused_ordering(953) 00:12:23.721 fused_ordering(954) 00:12:23.721 fused_ordering(955) 00:12:23.721 fused_ordering(956) 00:12:23.721 fused_ordering(957) 00:12:23.721 fused_ordering(958) 00:12:23.721 fused_ordering(959) 00:12:23.721 fused_ordering(960) 00:12:23.721 fused_ordering(961) 00:12:23.721 fused_ordering(962) 00:12:23.721 fused_ordering(963) 00:12:23.721 fused_ordering(964) 00:12:23.721 fused_ordering(965) 00:12:23.721 fused_ordering(966) 00:12:23.721 fused_ordering(967) 00:12:23.721 fused_ordering(968) 00:12:23.721 fused_ordering(969) 00:12:23.721 fused_ordering(970) 00:12:23.721 fused_ordering(971) 00:12:23.721 fused_ordering(972) 00:12:23.721 fused_ordering(973) 00:12:23.721 fused_ordering(974) 00:12:23.721 fused_ordering(975) 00:12:23.721 fused_ordering(976) 00:12:23.721 fused_ordering(977) 00:12:23.721 fused_ordering(978) 00:12:23.721 fused_ordering(979) 00:12:23.721 fused_ordering(980) 00:12:23.721 fused_ordering(981) 00:12:23.721 fused_ordering(982) 00:12:23.721 fused_ordering(983) 00:12:23.721 fused_ordering(984) 00:12:23.721 fused_ordering(985) 00:12:23.721 fused_ordering(986) 00:12:23.721 fused_ordering(987) 00:12:23.721 fused_ordering(988) 00:12:23.721 fused_ordering(989) 00:12:23.721 fused_ordering(990) 00:12:23.721 fused_ordering(991) 00:12:23.721 fused_ordering(992) 00:12:23.721 fused_ordering(993) 00:12:23.721 fused_ordering(994) 00:12:23.721 fused_ordering(995) 00:12:23.721 fused_ordering(996) 00:12:23.721 fused_ordering(997) 00:12:23.721 fused_ordering(998) 00:12:23.721 fused_ordering(999) 00:12:23.721 fused_ordering(1000) 00:12:23.721 fused_ordering(1001) 00:12:23.721 fused_ordering(1002) 00:12:23.721 fused_ordering(1003) 00:12:23.721 fused_ordering(1004) 00:12:23.721 fused_ordering(1005) 00:12:23.721 fused_ordering(1006) 00:12:23.721 fused_ordering(1007) 00:12:23.721 fused_ordering(1008) 00:12:23.721 fused_ordering(1009) 00:12:23.721 fused_ordering(1010) 00:12:23.721 fused_ordering(1011) 00:12:23.721 fused_ordering(1012) 00:12:23.721 fused_ordering(1013) 00:12:23.721 fused_ordering(1014) 00:12:23.721 fused_ordering(1015) 00:12:23.721 fused_ordering(1016) 00:12:23.721 fused_ordering(1017) 00:12:23.721 fused_ordering(1018) 00:12:23.721 fused_ordering(1019) 00:12:23.721 fused_ordering(1020) 00:12:23.721 fused_ordering(1021) 00:12:23.721 fused_ordering(1022) 00:12:23.721 fused_ordering(1023) 00:12:23.721 11:19:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:23.721 11:19:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:23.721 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.721 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:23.721 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:12:23.722 rmmod nvme_tcp 00:12:23.722 rmmod nvme_fabrics 00:12:23.722 rmmod nvme_keyring 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1440294 ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 1440294 ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1440294' 00:12:23.722 killing process with pid 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 1440294 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.722 11:19:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.267 11:19:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.267 00:12:26.267 real 0m14.060s 00:12:26.267 user 0m7.263s 00:12:26.267 sys 0m7.435s 00:12:26.267 11:19:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:26.267 11:19:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.267 ************************************ 00:12:26.267 END TEST nvmf_fused_ordering 00:12:26.267 ************************************ 00:12:26.267 11:19:23 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:26.267 11:19:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:26.267 11:19:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:26.267 11:19:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.267 
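The nvmf_fused_ordering run above ends with the harness's standard teardown: the nvme-tcp/nvme-fabrics initiator modules are unloaded, the nvmf_tgt process started for the test (pid 1440294) is killed, the test addresses are flushed, and the timing summary is printed before run_test launches the next script, delete_subsystem.sh, below. A minimal bash sketch of that cleanup, assuming the pid, interface, and namespace names from this particular run; the namespace deletion is an assumption about what _remove_spdk_ns does:

  #!/usr/bin/env bash
  # Sketch of the nvmftestfini teardown logged above (not the harness's exact code).
  sync
  for i in {1..20}; do                        # the harness retries module removal
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  kill 1440294 2>/dev/null || true            # stop the nvmf_tgt started for this test
  ip -4 addr flush cvl_0_1                    # drop the initiator-side test address
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of _remove_spdk_ns

With that done, the output of the nvmf_delete_subsystem test follows.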
************************************ 00:12:26.267 START TEST nvmf_delete_subsystem 00:12:26.267 ************************************ 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:26.267 * Looking for test storage... 00:12:26.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.267 11:19:23 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.267 11:19:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:34.415 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:34.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.415 
11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:34.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:34.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.415 11:19:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:12:34.415 00:12:34.415 --- 10.0.0.2 ping statistics --- 00:12:34.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.415 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:12:34.415 00:12:34.415 --- 10.0.0.1 ping statistics --- 00:12:34.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.415 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:34.415 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1445313 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1445313 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 1445313 ']' 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.416 11:19:31 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:34.416 11:19:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.416 [2024-06-10 11:19:31.560539] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:12:34.416 [2024-06-10 11:19:31.560585] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.416 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.676 [2024-06-10 11:19:31.647453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:34.676 [2024-06-10 11:19:31.729812] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.676 [2024-06-10 11:19:31.729881] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.676 [2024-06-10 11:19:31.729889] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.676 [2024-06-10 11:19:31.729895] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.676 [2024-06-10 11:19:31.729901] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
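At this point delete_subsystem.sh has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 1445313) and waitforlisten is blocking on its RPC socket; once the reactors report in below, the target is configured entirely over JSON-RPC. A minimal sketch of the RPC sequence visible in the lines that follow, assuming the harness's rpc_cmd wrapper is equivalent to invoking scripts/rpc.py against the default /var/tmp/spdk.sock socket:

  #!/usr/bin/env bash
  # Mirrors the rpc_cmd calls logged below; NQNs, sizes, and addresses are the ones used in this run.
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512         # 1000 MB null bdev, 512-byte block size
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev keeps I/O outstanding long enough for the test to matter: spdk_nvme_perf is then pointed at the 10.0.0.2:4420 listener and nvmf_delete_subsystem is issued while that I/O is still in flight, so the long run of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines further down is consistent with the subsystem being deliberately deleted under load.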
00:12:34.676 [2024-06-10 11:19:31.730074] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.676 [2024-06-10 11:19:31.730078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 [2024-06-10 11:19:32.441620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 [2024-06-10 11:19:32.457771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 NULL1 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 Delay0 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.264 11:19:32 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.264 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.526 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1445382 00:12:35.526 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:35.526 11:19:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:35.526 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.526 [2024-06-10 11:19:32.542396] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:37.438 11:19:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.438 11:19:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.438 11:19:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read 
completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Write completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read 
completed with error (sct=0, sc=8) 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.700 starting I/O failed: -6 00:12:37.700 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 
00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 starting I/O failed: -6 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 [2024-06-10 11:19:34.830119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4500000c00 is same with the state(5) to be set 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Write completed with error (sct=0, sc=8) 00:12:37.701 Read completed with error (sct=0, sc=8) 00:12:38.643 [2024-06-10 11:19:35.805007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2339550 is same with the state(5) to be set 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, 
sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 [2024-06-10 11:19:35.830565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f450000c780 is same with the state(5) to be set 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Read completed with error (sct=0, sc=8) 00:12:38.643 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 [2024-06-10 11:19:35.830957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f450000bfe0 is same with the state(5) to be set 
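Two kinds of records repeat through this stretch of the first perf run: per-command completions reported as "Read/Write completed with error (sct=0, sc=8)", consistent with outstanding commands being aborted while the deleted subsystem's queues are torn down, and nvme_tcp qpair recv-state errors as the initiator-side connections are dropped. When only the totals matter, a dump like this can be summarized directly from the saved console output (build.log is a hypothetical file name, not something this job produced):

    grep -c 'Read completed with error'  build.log   # aborted reads
    grep -c 'Write completed with error' build.log   # aborted writes
    grep -c 'starting I/O failed'        build.log   # submissions that failed outright with -6
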
00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 [2024-06-10 11:19:35.832831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359e60 is same with the state(5) to be set 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed 
with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Write completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 Read completed with error (sct=0, sc=8) 00:12:38.644 [2024-06-10 11:19:35.833157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235a220 is same with the state(5) to be set 00:12:38.644 Initializing NVMe Controllers 00:12:38.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:38.644 Controller IO queue size 128, less than required. 00:12:38.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:38.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:38.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:38.644 Initialization complete. Launching workers. 
00:12:38.644 ======================================================== 00:12:38.644 Latency(us) 00:12:38.644 Device Information : IOPS MiB/s Average min max 00:12:38.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.73 0.09 888987.95 334.29 1008498.76 00:12:38.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.90 0.07 1056404.72 254.44 2001043.15 00:12:38.644 ======================================================== 00:12:38.644 Total : 342.63 0.17 962719.46 254.44 2001043.15 00:12:38.644 00:12:38.644 [2024-06-10 11:19:35.833501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2339550 (9): Bad file descriptor 00:12:38.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:38.644 11:19:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.644 11:19:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:38.644 11:19:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1445382 00:12:38.644 11:19:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1445382 00:12:39.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1445382) - No such process 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1445382 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1445382 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1445382 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.213 [2024-06-10 11:19:36.365495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1446119 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:39.213 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:39.213 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.473 [2024-06-10 11:19:36.443558] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
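The second perf run started above (pid 1446119) repeats the flow of the first one, except that this time the subsystem stays up and the run completes cleanly (see the second latency table below): create the subsystem and listener, add the namespace, launch spdk_nvme_perf in the background, and poll its pid with kill -0 until it exits. A condensed sketch of the whole pattern as the first run executed it, with paths, NQN and perf arguments taken from this log; it assumes the Delay0 bdev already exists, as it does at this point in the test, and the helper shape is illustrative rather than the script's exact code:

    #!/usr/bin/env bash
    # Sketch of the delete-while-under-I/O pattern traced by delete_subsystem.sh.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Subsystem with a TCP listener and the Delay0 namespace (same RPCs as in the trace).
    $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$NQN" Delay0

    # Drive random 70/30 read/write I/O at queue depth 128 from cores 2-3.
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2

    # Delete the subsystem out from under the running workload ...
    $rpc nvmf_delete_subsystem "$NQN"

    # ... then poll until perf notices and exits (kill -0 only probes the pid).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1
        sleep 0.5
    done
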
00:12:39.733 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:39.733 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:39.733 11:19:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:40.304 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:40.304 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:40.304 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:40.873 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:40.873 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:40.873 11:19:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:41.446 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:41.446 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:41.446 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:41.706 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:41.706 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:41.706 11:19:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.276 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:42.276 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:42.276 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.536 Initializing NVMe Controllers 00:12:42.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.537 Controller IO queue size 128, less than required. 00:12:42.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:42.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:42.537 Initialization complete. Launching workers. 
00:12:42.537 ======================================================== 00:12:42.537 Latency(us) 00:12:42.537 Device Information : IOPS MiB/s Average min max 00:12:42.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002413.34 1000159.29 1042472.38 00:12:42.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003531.13 1000337.39 1041609.35 00:12:42.537 ======================================================== 00:12:42.537 Total : 256.00 0.12 1002972.23 1000159.29 1042472.38 00:12:42.537 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1446119 00:12:42.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1446119) - No such process 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1446119 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.798 rmmod nvme_tcp 00:12:42.798 rmmod nvme_fabrics 00:12:42.798 rmmod nvme_keyring 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1445313 ']' 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1445313 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 1445313 ']' 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 1445313 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:42.798 11:19:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1445313 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1445313' 00:12:43.058 killing process with pid 1445313 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 1445313 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 
1445313 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.058 11:19:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.599 11:19:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.599 00:12:45.599 real 0m19.188s 00:12:45.599 user 0m31.597s 00:12:45.599 sys 0m7.131s 00:12:45.599 11:19:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:45.599 11:19:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.599 ************************************ 00:12:45.599 END TEST nvmf_delete_subsystem 00:12:45.599 ************************************ 00:12:45.599 11:19:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.599 11:19:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:45.599 11:19:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:45.599 11:19:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.599 ************************************ 00:12:45.599 START TEST nvmf_ns_masking 00:12:45.599 ************************************ 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.599 * Looking for test storage... 
00:12:45.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.599 11:19:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=4d0fea3c-43c6-4cea-b9e9-ce9542a9eb42 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.600 11:19:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.600 11:19:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:53.891 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.891 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:53.892 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:53.892 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
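The block above is gather_supported_nvmf_pci_devs resolving each supported NIC function to its kernel netdev through sysfs: the E810 device IDs are matched first, then the net/ directory under each PCI address names the interface. A stripped-down sketch of that lookup, reusing the addresses and cvl_* names reported in this log:

    # For each E810 function found above, list the net interfaces bound to it.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
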
00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:53.892 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:12:53.892 00:12:53.892 --- 10.0.0.2 ping statistics --- 00:12:53.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.892 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:53.892 00:12:53.892 --- 10.0.0.1 ping statistics --- 00:12:53.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.892 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1451109 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1451109 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 1451109 ']' 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:53.892 11:19:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.892 [2024-06-10 11:19:50.786977] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
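nvmf_tcp_init, traced just above, splits the two E810 ports between a network namespace for the target (cvl_0_0 inside cvl_0_0_ns_spdk) and the root namespace for the initiator (cvl_0_1), so a real TCP path between 10.0.0.1 and 10.0.0.2 is exercised on a single machine. A condensed sketch of that setup, using the interface names and addresses from this log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk           # target-side namespace
    TARGET_IF=cvl_0_0            # target port, 10.0.0.2
    INITIATOR_IF=cvl_0_1         # initiator port, 10.0.0.1

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

    # The target application then runs inside the namespace, as in nvmfappstart:
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
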
00:12:53.892 [2024-06-10 11:19:50.787044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.892 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.892 [2024-06-10 11:19:50.880773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.892 [2024-06-10 11:19:50.975792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.892 [2024-06-10 11:19:50.975866] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.892 [2024-06-10 11:19:50.975874] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.892 [2024-06-10 11:19:50.975880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.892 [2024-06-10 11:19:50.975886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.892 [2024-06-10 11:19:50.976035] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.892 [2024-06-10 11:19:50.976187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.892 [2024-06-10 11:19:50.976348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.892 [2024-06-10 11:19:50.976348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.463 11:19:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:54.463 11:19:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:12:54.463 11:19:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.463 11:19:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:54.463 11:19:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.723 11:19:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.723 11:19:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.723 [2024-06-10 11:19:51.864989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.723 11:19:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:54.723 11:19:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:54.723 11:19:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.983 Malloc1 00:12:54.983 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.243 Malloc2 00:12:55.243 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.502 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:55.762 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.762 [2024-06-10 11:19:52.910461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.762 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:55.762 11:19:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d0fea3c-43c6-4cea-b9e9-ce9542a9eb42 -a 10.0.0.2 -s 4420 -i 4 00:12:56.022 11:19:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.022 11:19:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:56.022 11:19:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.022 11:19:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:56.022 11:19:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:57.937 [ 0]:0x1 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.937 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a94a123390884596a5ee8d66a6baa674 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a94a123390884596a5ee8d66a6baa674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:12:58.244 [ 0]:0x1 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a94a123390884596a5ee8d66a6baa674 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a94a123390884596a5ee8d66a6baa674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:58.244 [ 1]:0x2 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.244 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:58.505 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:12:58.505 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.505 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:58.505 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.505 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.766 11:19:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:59.026 11:19:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:59.026 11:19:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d0fea3c-43c6-4cea-b9e9-ce9542a9eb42 -a 10.0.0.2 -s 4420 -i 4 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:12:59.287 11:19:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.202 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:01.464 [ 0]:0x2 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.464 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:01.725 [ 0]:0x1 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a94a123390884596a5ee8d66a6baa674 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a94a123390884596a5ee8d66a6baa674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:01.725 [ 1]:0x2 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.725 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:01.986 11:19:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:01.986 
11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:01.986 [ 0]:0x2 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.986 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d0fea3c-43c6-4cea-b9e9-ce9542a9eb42 -a 10.0.0.2 -s 4420 -i 4 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:02.247 11:19:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:04.792 [ 0]:0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a94a123390884596a5ee8d66a6baa674 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a94a123390884596a5ee8d66a6baa674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:04.792 [ 1]:0x2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:04.792 [ 0]:0x2 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:04.792 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:04.793 11:20:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.062 [2024-06-10 11:20:02.075453] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:05.062 request: 00:13:05.062 { 00:13:05.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.062 "nsid": 2, 00:13:05.062 "host": "nqn.2016-06.io.spdk:host1", 00:13:05.062 "method": 
"nvmf_ns_remove_host", 00:13:05.062 "req_id": 1 00:13:05.062 } 00:13:05.062 Got JSON-RPC error response 00:13:05.062 response: 00:13:05.062 { 00:13:05.062 "code": -32602, 00:13:05.062 "message": "Invalid parameters" 00:13:05.062 } 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:05.062 [ 0]:0x2 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1ccd240373a447f7ba439b7cbc2f6de4 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1ccd240373a447f7ba439b7cbc2f6de4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.062 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.322 rmmod nvme_tcp 00:13:05.322 rmmod nvme_fabrics 00:13:05.322 rmmod nvme_keyring 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1451109 ']' 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1451109 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1451109 ']' 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1451109 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:05.322 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1451109 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1451109' 00:13:05.582 killing process with pid 1451109 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1451109 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1451109 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.582 11:20:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.122 
11:20:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.122 00:13:08.122 real 0m22.482s 00:13:08.122 user 0m52.396s 00:13:08.122 sys 0m7.634s 00:13:08.122 11:20:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:08.122 11:20:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.122 ************************************ 00:13:08.122 END TEST nvmf_ns_masking 00:13:08.122 ************************************ 00:13:08.122 11:20:04 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:08.122 11:20:04 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:08.122 11:20:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:08.122 11:20:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:08.122 11:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.122 ************************************ 00:13:08.122 START TEST nvmf_nvme_cli 00:13:08.122 ************************************ 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:08.122 * Looking for test storage... 00:13:08.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.122 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:08.123 11:20:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.123 11:20:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.264 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:16.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:16.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.265 11:20:12 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:16.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:16.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.265 11:20:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:13:16.265 00:13:16.265 --- 10.0.0.2 ping statistics --- 00:13:16.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.265 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:13:16.265 00:13:16.265 --- 10.0.0.1 ping statistics --- 00:13:16.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.265 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1457660 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1457660 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1457660 ']' 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
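Before the target application comes up, nvmf_tcp_init splits the two detected ice/E810 ports between the root network namespace and a dedicated one, so that a single machine can act as both initiator and target over a real link. Condensed from the commands traced above (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the values this run detected and chose, not fixed constants):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # verify the path before starting nvmf_tgt

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process the script is waiting on at this point.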
00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:16.265 11:20:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.265 [2024-06-10 11:20:13.325298] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:13:16.265 [2024-06-10 11:20:13.325365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.265 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.265 [2024-06-10 11:20:13.399888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.527 [2024-06-10 11:20:13.494499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.527 [2024-06-10 11:20:13.494549] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.527 [2024-06-10 11:20:13.494557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.527 [2024-06-10 11:20:13.494564] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.527 [2024-06-10 11:20:13.494570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.527 [2024-06-10 11:20:13.494713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.527 [2024-06-10 11:20:13.494856] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.527 [2024-06-10 11:20:13.494995] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.527 [2024-06-10 11:20:13.494995] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 [2024-06-10 11:20:14.236574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 Malloc0 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 Malloc1 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 [2024-06-10 11:20:14.322801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.159 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.160 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.160 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.160 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.160 11:20:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.160 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:13:17.419 00:13:17.419 Discovery Log Number of Records 2, Generation counter 2 00:13:17.419 =====Discovery Log Entry 0====== 00:13:17.419 trtype: tcp 00:13:17.419 adrfam: ipv4 00:13:17.419 subtype: current discovery subsystem 00:13:17.419 treq: not required 00:13:17.419 portid: 0 00:13:17.419 trsvcid: 4420 00:13:17.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:17.419 traddr: 10.0.0.2 00:13:17.419 eflags: explicit discovery connections, duplicate discovery information 00:13:17.419 sectype: none 00:13:17.419 =====Discovery Log Entry 1====== 00:13:17.419 trtype: tcp 00:13:17.419 adrfam: ipv4 00:13:17.419 subtype: nvme subsystem 00:13:17.419 treq: not required 00:13:17.419 portid: 0 00:13:17.419 trsvcid: 4420 
00:13:17.419 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:17.419 traddr: 10.0.0.2 00:13:17.419 eflags: none 00:13:17.419 sectype: none 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:17.419 11:20:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:18.800 11:20:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:20.794 11:20:17 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:20.794 /dev/nvme0n1 ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:20.794 11:20:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:20.794 11:20:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:21.057 11:20:18 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.057 rmmod nvme_tcp 00:13:21.057 rmmod nvme_fabrics 00:13:21.057 rmmod nvme_keyring 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1457660 ']' 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1457660 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1457660 ']' 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1457660 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1457660 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1457660' 00:13:21.057 killing process with pid 1457660 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1457660 00:13:21.057 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1457660 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.318 11:20:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.233 11:20:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.495 00:13:23.495 real 0m15.581s 00:13:23.495 user 0m21.961s 00:13:23.495 sys 0m6.697s 00:13:23.495 11:20:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:23.495 11:20:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 ************************************ 00:13:23.495 END TEST nvmf_nvme_cli 00:13:23.495 ************************************ 00:13:23.495 11:20:20 nvmf_tcp 
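For reference, a condensed hand-written sketch of the nvme_cli flow traced above, with the NQN, serial, address and port copied from the trace; the --hostnqn/--hostid options and the harness's rpc_cmd wrapper are elided for brevity, and the Malloc0/Malloc1 bdevs are created earlier in the script:

  rpc=scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420                  # two records: discovery subsystem + cnode1
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2 namespaces, as in the trace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
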
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:23.495 11:20:20 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.495 11:20:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:23.495 11:20:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:23.495 11:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 ************************************ 00:13:23.495 START TEST nvmf_vfio_user 00:13:23.495 ************************************ 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.495 * Looking for test storage... 00:13:23.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:23.495 
11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1459066 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1459066' 00:13:23.495 Process pid: 1459066 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1459066 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1459066 ']' 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:23.495 11:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.755 [2024-06-10 11:20:20.728699] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:13:23.755 [2024-06-10 11:20:20.728774] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.755 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.755 [2024-06-10 11:20:20.816069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.755 [2024-06-10 11:20:20.886804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.755 [2024-06-10 11:20:20.886846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.755 [2024-06-10 11:20:20.886854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.755 [2024-06-10 11:20:20.886860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.755 [2024-06-10 11:20:20.886865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
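The trace below goes on to create the VFIOUSER transport and two vfio-user controllers. A minimal standalone sketch of that bring-up, with the paths, NQNs and malloc sizes copied from the traced rpc.py calls (the wait for the RPC socket is paraphrased as a comment), would be:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  # ... wait for the target's RPC socket (/var/tmp/spdk.sock) to come up ...
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # the second device (Malloc2 / nqn.2019-07.io.spdk:cnode2 / vfio-user2) repeats the same steps
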
00:13:23.755 [2024-06-10 11:20:20.886930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.755 [2024-06-10 11:20:20.887193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.755 [2024-06-10 11:20:20.887346] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.755 [2024-06-10 11:20:20.887347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.695 11:20:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:24.695 11:20:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:24.695 11:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:25.635 11:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.895 Malloc1 00:13:25.895 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:26.155 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:26.416 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:26.416 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.416 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.416 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.675 Malloc2 00:13:26.675 11:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:26.935 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:27.196 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:27.459 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:27.459 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:27.459 11:20:24 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.459 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:27.459 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:27.459 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:27.459 [2024-06-10 11:20:24.496288] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:13:27.459 [2024-06-10 11:20:24.496356] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459760 ] 00:13:27.459 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.459 [2024-06-10 11:20:24.528614] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:27.459 [2024-06-10 11:20:24.534264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.459 [2024-06-10 11:20:24.534285] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f988feea000 00:13:27.459 [2024-06-10 11:20:24.535282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.536277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.537280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.538283] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.539288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.540300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.541299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.542308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.459 [2024-06-10 11:20:24.545829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.459 [2024-06-10 11:20:24.545842] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f988fedf000 00:13:27.459 [2024-06-10 11:20:24.547069] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.459 [2024-06-10 11:20:24.562904] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:27.459 [2024-06-10 11:20:24.562927] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:27.459 [2024-06-10 11:20:24.568479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:27.459 [2024-06-10 11:20:24.568524] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:27.459 [2024-06-10 11:20:24.568602] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:27.459 [2024-06-10 11:20:24.568620] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:27.459 [2024-06-10 11:20:24.568627] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:27.459 [2024-06-10 11:20:24.569484] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:27.459 [2024-06-10 11:20:24.569494] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:27.459 [2024-06-10 11:20:24.569500] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:27.459 [2024-06-10 11:20:24.570487] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:27.459 [2024-06-10 11:20:24.570496] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:27.459 [2024-06-10 11:20:24.570502] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.571495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:27.459 [2024-06-10 11:20:24.571504] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.572501] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:27.459 [2024-06-10 11:20:24.572509] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:27.459 [2024-06-10 11:20:24.572513] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.572519] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.572625] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:27.459 [2024-06-10 11:20:24.572629] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.572633] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:27.459 [2024-06-10 11:20:24.573515] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:27.459 [2024-06-10 11:20:24.574519] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:27.459 [2024-06-10 11:20:24.575531] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.459 [2024-06-10 11:20:24.576531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.459 [2024-06-10 11:20:24.576590] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:27.459 [2024-06-10 11:20:24.577538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:27.459 [2024-06-10 11:20:24.577545] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:27.459 [2024-06-10 11:20:24.577550] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:27.459 [2024-06-10 11:20:24.577572] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:27.459 [2024-06-10 11:20:24.577579] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:27.459 [2024-06-10 11:20:24.577593] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.459 [2024-06-10 11:20:24.577598] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.459 [2024-06-10 11:20:24.577611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.459 [2024-06-10 11:20:24.577651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:27.459 [2024-06-10 11:20:24.577660] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:27.459 [2024-06-10 11:20:24.577664] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:27.459 [2024-06-10 11:20:24.577668] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:27.459 [2024-06-10 11:20:24.577672] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:27.460 [2024-06-10 11:20:24.577677] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:13:27.460 [2024-06-10 11:20:24.577681] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:27.460 [2024-06-10 11:20:24.577685] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577692] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.577715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.577729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.460 [2024-06-10 11:20:24.577736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.460 [2024-06-10 11:20:24.577744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.460 [2024-06-10 11:20:24.577752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.460 [2024-06-10 11:20:24.577756] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577762] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.577779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.577786] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:27.460 [2024-06-10 11:20:24.577792] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577798] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577804] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.577831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.577877] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577885] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577892] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:27.460 [2024-06-10 11:20:24.577896] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:27.460 [2024-06-10 11:20:24.577902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.577914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.577922] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:27.460 [2024-06-10 11:20:24.577935] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577942] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577948] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.460 [2024-06-10 11:20:24.577952] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.460 [2024-06-10 11:20:24.577958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.577975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.577986] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577993] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.577999] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.460 [2024-06-10 11:20:24.578003] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.460 [2024-06-10 11:20:24.578009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.578033] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578039] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578047] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578053] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578057] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578062] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:27.460 [2024-06-10 11:20:24.578066] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:27.460 [2024-06-10 11:20:24.578070] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:27.460 [2024-06-10 11:20:24.578090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.578113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.578133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.578159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:27.460 [2024-06-10 11:20:24.578177] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:27.460 [2024-06-10 11:20:24.578182] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:27.460 [2024-06-10 11:20:24.578185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:27.460 [2024-06-10 11:20:24.578188] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:27.460 [2024-06-10 11:20:24.578194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:27.460 [2024-06-10 11:20:24.578202] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:27.460 [2024-06-10 11:20:24.578206] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:27.460 [2024-06-10 11:20:24.578211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578218] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:27.460 [2024-06-10 11:20:24.578222] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.460 [2024-06-10 11:20:24.578227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.460 [2024-06-10 11:20:24.578234] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:27.461 [2024-06-10 11:20:24.578238] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:27.461 [2024-06-10 11:20:24.578245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:27.461 [2024-06-10 11:20:24.578252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:27.461 [2024-06-10 11:20:24.578264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:27.461 [2024-06-10 11:20:24.578272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:27.461 [2024-06-10 11:20:24.578280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:27.461 ===================================================== 00:13:27.461 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.461 ===================================================== 00:13:27.461 Controller Capabilities/Features 00:13:27.461 ================================ 00:13:27.461 Vendor ID: 4e58 00:13:27.461 Subsystem Vendor ID: 4e58 00:13:27.461 Serial Number: SPDK1 00:13:27.461 Model Number: SPDK bdev Controller 00:13:27.461 Firmware Version: 24.09 00:13:27.461 Recommended Arb Burst: 6 00:13:27.461 IEEE OUI Identifier: 8d 6b 50 00:13:27.461 Multi-path I/O 00:13:27.461 May have multiple subsystem ports: Yes 00:13:27.461 May have multiple controllers: Yes 00:13:27.461 Associated with SR-IOV VF: No 00:13:27.461 Max Data Transfer Size: 131072 00:13:27.461 Max Number of Namespaces: 32 00:13:27.461 Max Number of I/O Queues: 127 00:13:27.461 NVMe Specification Version (VS): 1.3 00:13:27.461 NVMe Specification Version (Identify): 1.3 00:13:27.461 Maximum Queue Entries: 256 00:13:27.461 Contiguous Queues Required: Yes 00:13:27.461 Arbitration Mechanisms Supported 00:13:27.461 Weighted Round Robin: Not Supported 00:13:27.461 Vendor Specific: Not Supported 00:13:27.461 Reset Timeout: 15000 ms 00:13:27.461 Doorbell Stride: 4 bytes 00:13:27.461 NVM Subsystem Reset: Not Supported 00:13:27.461 Command Sets Supported 00:13:27.461 NVM Command Set: Supported 00:13:27.461 Boot Partition: Not Supported 00:13:27.461 Memory Page Size Minimum: 4096 bytes 00:13:27.461 Memory Page Size Maximum: 4096 bytes 00:13:27.461 Persistent Memory Region: Not Supported 00:13:27.461 Optional Asynchronous Events Supported 00:13:27.461 Namespace Attribute Notices: Supported 00:13:27.461 Firmware Activation Notices: Not Supported 00:13:27.461 ANA Change Notices: Not Supported 00:13:27.461 PLE Aggregate Log Change Notices: 
Not Supported 00:13:27.461 LBA Status Info Alert Notices: Not Supported 00:13:27.461 EGE Aggregate Log Change Notices: Not Supported 00:13:27.461 Normal NVM Subsystem Shutdown event: Not Supported 00:13:27.461 Zone Descriptor Change Notices: Not Supported 00:13:27.461 Discovery Log Change Notices: Not Supported 00:13:27.461 Controller Attributes 00:13:27.461 128-bit Host Identifier: Supported 00:13:27.461 Non-Operational Permissive Mode: Not Supported 00:13:27.461 NVM Sets: Not Supported 00:13:27.461 Read Recovery Levels: Not Supported 00:13:27.461 Endurance Groups: Not Supported 00:13:27.461 Predictable Latency Mode: Not Supported 00:13:27.461 Traffic Based Keep ALive: Not Supported 00:13:27.461 Namespace Granularity: Not Supported 00:13:27.461 SQ Associations: Not Supported 00:13:27.461 UUID List: Not Supported 00:13:27.461 Multi-Domain Subsystem: Not Supported 00:13:27.461 Fixed Capacity Management: Not Supported 00:13:27.461 Variable Capacity Management: Not Supported 00:13:27.461 Delete Endurance Group: Not Supported 00:13:27.461 Delete NVM Set: Not Supported 00:13:27.461 Extended LBA Formats Supported: Not Supported 00:13:27.461 Flexible Data Placement Supported: Not Supported 00:13:27.461 00:13:27.461 Controller Memory Buffer Support 00:13:27.461 ================================ 00:13:27.461 Supported: No 00:13:27.461 00:13:27.461 Persistent Memory Region Support 00:13:27.461 ================================ 00:13:27.461 Supported: No 00:13:27.461 00:13:27.461 Admin Command Set Attributes 00:13:27.461 ============================ 00:13:27.461 Security Send/Receive: Not Supported 00:13:27.461 Format NVM: Not Supported 00:13:27.461 Firmware Activate/Download: Not Supported 00:13:27.461 Namespace Management: Not Supported 00:13:27.461 Device Self-Test: Not Supported 00:13:27.461 Directives: Not Supported 00:13:27.461 NVMe-MI: Not Supported 00:13:27.461 Virtualization Management: Not Supported 00:13:27.461 Doorbell Buffer Config: Not Supported 00:13:27.461 Get LBA Status Capability: Not Supported 00:13:27.461 Command & Feature Lockdown Capability: Not Supported 00:13:27.461 Abort Command Limit: 4 00:13:27.461 Async Event Request Limit: 4 00:13:27.461 Number of Firmware Slots: N/A 00:13:27.461 Firmware Slot 1 Read-Only: N/A 00:13:27.461 Firmware Activation Without Reset: N/A 00:13:27.461 Multiple Update Detection Support: N/A 00:13:27.461 Firmware Update Granularity: No Information Provided 00:13:27.461 Per-Namespace SMART Log: No 00:13:27.461 Asymmetric Namespace Access Log Page: Not Supported 00:13:27.461 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:27.461 Command Effects Log Page: Supported 00:13:27.461 Get Log Page Extended Data: Supported 00:13:27.461 Telemetry Log Pages: Not Supported 00:13:27.461 Persistent Event Log Pages: Not Supported 00:13:27.461 Supported Log Pages Log Page: May Support 00:13:27.461 Commands Supported & Effects Log Page: Not Supported 00:13:27.461 Feature Identifiers & Effects Log Page:May Support 00:13:27.461 NVMe-MI Commands & Effects Log Page: May Support 00:13:27.461 Data Area 4 for Telemetry Log: Not Supported 00:13:27.461 Error Log Page Entries Supported: 128 00:13:27.461 Keep Alive: Supported 00:13:27.461 Keep Alive Granularity: 10000 ms 00:13:27.461 00:13:27.461 NVM Command Set Attributes 00:13:27.461 ========================== 00:13:27.461 Submission Queue Entry Size 00:13:27.461 Max: 64 00:13:27.461 Min: 64 00:13:27.461 Completion Queue Entry Size 00:13:27.461 Max: 16 00:13:27.461 Min: 16 00:13:27.461 Number of Namespaces: 32 00:13:27.461 Compare 
Command: Supported 00:13:27.461 Write Uncorrectable Command: Not Supported 00:13:27.461 Dataset Management Command: Supported 00:13:27.461 Write Zeroes Command: Supported 00:13:27.461 Set Features Save Field: Not Supported 00:13:27.461 Reservations: Not Supported 00:13:27.461 Timestamp: Not Supported 00:13:27.461 Copy: Supported 00:13:27.461 Volatile Write Cache: Present 00:13:27.461 Atomic Write Unit (Normal): 1 00:13:27.461 Atomic Write Unit (PFail): 1 00:13:27.461 Atomic Compare & Write Unit: 1 00:13:27.461 Fused Compare & Write: Supported 00:13:27.461 Scatter-Gather List 00:13:27.461 SGL Command Set: Supported (Dword aligned) 00:13:27.461 SGL Keyed: Not Supported 00:13:27.461 SGL Bit Bucket Descriptor: Not Supported 00:13:27.461 SGL Metadata Pointer: Not Supported 00:13:27.461 Oversized SGL: Not Supported 00:13:27.461 SGL Metadata Address: Not Supported 00:13:27.461 SGL Offset: Not Supported 00:13:27.461 Transport SGL Data Block: Not Supported 00:13:27.461 Replay Protected Memory Block: Not Supported 00:13:27.461 00:13:27.461 Firmware Slot Information 00:13:27.461 ========================= 00:13:27.461 Active slot: 1 00:13:27.461 Slot 1 Firmware Revision: 24.09 00:13:27.461 00:13:27.461 00:13:27.461 Commands Supported and Effects 00:13:27.461 ============================== 00:13:27.461 Admin Commands 00:13:27.461 -------------- 00:13:27.461 Get Log Page (02h): Supported 00:13:27.461 Identify (06h): Supported 00:13:27.461 Abort (08h): Supported 00:13:27.461 Set Features (09h): Supported 00:13:27.461 Get Features (0Ah): Supported 00:13:27.461 Asynchronous Event Request (0Ch): Supported 00:13:27.461 Keep Alive (18h): Supported 00:13:27.461 I/O Commands 00:13:27.461 ------------ 00:13:27.462 Flush (00h): Supported LBA-Change 00:13:27.462 Write (01h): Supported LBA-Change 00:13:27.462 Read (02h): Supported 00:13:27.462 Compare (05h): Supported 00:13:27.462 Write Zeroes (08h): Supported LBA-Change 00:13:27.462 Dataset Management (09h): Supported LBA-Change 00:13:27.462 Copy (19h): Supported LBA-Change 00:13:27.462 Unknown (79h): Supported LBA-Change 00:13:27.462 Unknown (7Ah): Supported 00:13:27.462 00:13:27.462 Error Log 00:13:27.462 ========= 00:13:27.462 00:13:27.462 Arbitration 00:13:27.462 =========== 00:13:27.462 Arbitration Burst: 1 00:13:27.462 00:13:27.462 Power Management 00:13:27.462 ================ 00:13:27.462 Number of Power States: 1 00:13:27.462 Current Power State: Power State #0 00:13:27.462 Power State #0: 00:13:27.462 Max Power: 0.00 W 00:13:27.462 Non-Operational State: Operational 00:13:27.462 Entry Latency: Not Reported 00:13:27.462 Exit Latency: Not Reported 00:13:27.462 Relative Read Throughput: 0 00:13:27.462 Relative Read Latency: 0 00:13:27.462 Relative Write Throughput: 0 00:13:27.462 Relative Write Latency: 0 00:13:27.462 Idle Power: Not Reported 00:13:27.462 Active Power: Not Reported 00:13:27.462 Non-Operational Permissive Mode: Not Supported 00:13:27.462 00:13:27.462 Health Information 00:13:27.462 ================== 00:13:27.462 Critical Warnings: 00:13:27.462 Available Spare Space: OK 00:13:27.462 Temperature: OK 00:13:27.462 Device Reliability: OK 00:13:27.462 Read Only: No 00:13:27.462 Volatile Memory Backup: OK 00:13:27.462 Current Temperature: 0 Kelvin (-2[2024-06-10 11:20:24.578373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:27.462 [2024-06-10 11:20:24.578385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:13:27.462 [2024-06-10 11:20:24.578407] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:27.462 [2024-06-10 11:20:24.578416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.462 [2024-06-10 11:20:24.578422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.462 [2024-06-10 11:20:24.578427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.462 [2024-06-10 11:20:24.578433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.462 [2024-06-10 11:20:24.578542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.462 [2024-06-10 11:20:24.578551] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:27.462 [2024-06-10 11:20:24.579549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.462 [2024-06-10 11:20:24.579595] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:27.462 [2024-06-10 11:20:24.579601] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:27.462 [2024-06-10 11:20:24.580555] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:27.462 [2024-06-10 11:20:24.580565] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:27.462 [2024-06-10 11:20:24.580619] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:27.462 [2024-06-10 11:20:24.582583] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.462 73 Celsius) 00:13:27.462 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:27.462 Available Spare: 0% 00:13:27.462 Available Spare Threshold: 0% 00:13:27.462 Life Percentage Used: 0% 00:13:27.462 Data Units Read: 0 00:13:27.462 Data Units Written: 0 00:13:27.462 Host Read Commands: 0 00:13:27.462 Host Write Commands: 0 00:13:27.462 Controller Busy Time: 0 minutes 00:13:27.462 Power Cycles: 0 00:13:27.462 Power On Hours: 0 hours 00:13:27.462 Unsafe Shutdowns: 0 00:13:27.462 Unrecoverable Media Errors: 0 00:13:27.462 Lifetime Error Log Entries: 0 00:13:27.462 Warning Temperature Time: 0 minutes 00:13:27.462 Critical Temperature Time: 0 minutes 00:13:27.462 00:13:27.462 Number of Queues 00:13:27.462 ================ 00:13:27.462 Number of I/O Submission Queues: 127 00:13:27.462 Number of I/O Completion Queues: 127 00:13:27.462 00:13:27.462 Active Namespaces 00:13:27.462 ================= 00:13:27.462 Namespace ID:1 00:13:27.462 Error Recovery Timeout: Unlimited 00:13:27.462 Command Set Identifier: NVM (00h) 00:13:27.462 Deallocate: Supported 00:13:27.462 Deallocated/Unwritten Error: Not Supported 00:13:27.462 Deallocated Read Value: Unknown 00:13:27.462 Deallocate 
in Write Zeroes: Not Supported 00:13:27.462 Deallocated Guard Field: 0xFFFF 00:13:27.462 Flush: Supported 00:13:27.462 Reservation: Supported 00:13:27.462 Namespace Sharing Capabilities: Multiple Controllers 00:13:27.462 Size (in LBAs): 131072 (0GiB) 00:13:27.462 Capacity (in LBAs): 131072 (0GiB) 00:13:27.462 Utilization (in LBAs): 131072 (0GiB) 00:13:27.462 NGUID: D023CE3E0452434BBD51DB2BF41D331B 00:13:27.462 UUID: d023ce3e-0452-434b-bd51-db2bf41d331b 00:13:27.462 Thin Provisioning: Not Supported 00:13:27.462 Per-NS Atomic Units: Yes 00:13:27.462 Atomic Boundary Size (Normal): 0 00:13:27.462 Atomic Boundary Size (PFail): 0 00:13:27.462 Atomic Boundary Offset: 0 00:13:27.462 Maximum Single Source Range Length: 65535 00:13:27.462 Maximum Copy Length: 65535 00:13:27.462 Maximum Source Range Count: 1 00:13:27.462 NGUID/EUI64 Never Reused: No 00:13:27.462 Namespace Write Protected: No 00:13:27.462 Number of LBA Formats: 1 00:13:27.462 Current LBA Format: LBA Format #00 00:13:27.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:27.462 00:13:27.462 11:20:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:27.462 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.722 [2024-06-10 11:20:24.773531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.002 Initializing NVMe Controllers 00:13:33.002 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.002 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:33.002 Initialization complete. Launching workers. 00:13:33.002 ======================================================== 00:13:33.002 Latency(us) 00:13:33.002 Device Information : IOPS MiB/s Average min max 00:13:33.003 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39887.74 155.81 3208.59 1111.24 7421.37 00:13:33.003 ======================================================== 00:13:33.003 Total : 39887.74 155.81 3208.59 1111.24 7421.37 00:13:33.003 00:13:33.003 [2024-06-10 11:20:29.789869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.003 11:20:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:33.003 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.003 [2024-06-10 11:20:29.988845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.284 Initializing NVMe Controllers 00:13:38.284 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:38.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:38.284 Initialization complete. Launching workers. 
00:13:38.284 ======================================================== 00:13:38.284 Latency(us) 00:13:38.284 Device Information : IOPS MiB/s Average min max 00:13:38.284 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.04 62.71 7978.89 6981.53 8980.59 00:13:38.284 ======================================================== 00:13:38.284 Total : 16053.04 62.71 7978.89 6981.53 8980.59 00:13:38.284 00:13:38.284 [2024-06-10 11:20:35.030257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.284 11:20:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:38.284 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.284 [2024-06-10 11:20:35.260321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.602 [2024-06-10 11:20:40.336017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.602 Initializing NVMe Controllers 00:13:43.602 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.602 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.602 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:43.602 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:43.602 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:43.602 Initialization complete. Launching workers. 00:13:43.602 Starting thread on core 2 00:13:43.602 Starting thread on core 3 00:13:43.602 Starting thread on core 1 00:13:43.602 11:20:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:43.602 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.602 [2024-06-10 11:20:40.613170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.897 [2024-06-10 11:20:43.665234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.897 Initializing NVMe Controllers 00:13:46.897 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.897 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.897 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:46.897 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:46.897 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:46.897 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:46.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:46.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:46.897 Initialization complete. Launching workers. 
00:13:46.897 Starting thread on core 1 with urgent priority queue 00:13:46.897 Starting thread on core 2 with urgent priority queue 00:13:46.897 Starting thread on core 3 with urgent priority queue 00:13:46.897 Starting thread on core 0 with urgent priority queue 00:13:46.898 SPDK bdev Controller (SPDK1 ) core 0: 14611.00 IO/s 6.84 secs/100000 ios 00:13:46.898 SPDK bdev Controller (SPDK1 ) core 1: 10300.33 IO/s 9.71 secs/100000 ios 00:13:46.898 SPDK bdev Controller (SPDK1 ) core 2: 11564.00 IO/s 8.65 secs/100000 ios 00:13:46.898 SPDK bdev Controller (SPDK1 ) core 3: 11156.00 IO/s 8.96 secs/100000 ios 00:13:46.898 ======================================================== 00:13:46.898 00:13:46.898 11:20:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.898 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.898 [2024-06-10 11:20:43.924495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.898 Initializing NVMe Controllers 00:13:46.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.898 Namespace ID: 1 size: 0GB 00:13:46.898 Initialization complete. 00:13:46.898 INFO: using host memory buffer for IO 00:13:46.898 Hello world! 00:13:46.898 [2024-06-10 11:20:43.957665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.898 11:20:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.898 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.157 [2024-06-10 11:20:44.208418] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.098 Initializing NVMe Controllers 00:13:48.098 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.098 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.098 Initialization complete. Launching workers. 
00:13:48.098 submit (in ns) avg, min, max = 8119.6, 3639.2, 3999186.2 00:13:48.098 complete (in ns) avg, min, max = 16955.5, 2200.8, 4029345.4 00:13:48.098 00:13:48.098 Submit histogram 00:13:48.098 ================ 00:13:48.098 Range in us Cumulative Count 00:13:48.098 3.618 - 3.643: 0.0362% ( 7) 00:13:48.098 3.643 - 3.668: 1.9522% ( 371) 00:13:48.098 3.668 - 3.692: 9.0069% ( 1366) 00:13:48.098 3.692 - 3.717: 18.8039% ( 1897) 00:13:48.098 3.717 - 3.742: 29.5305% ( 2077) 00:13:48.099 3.742 - 3.766: 40.0558% ( 2038) 00:13:48.099 3.766 - 3.791: 52.3679% ( 2384) 00:13:48.099 3.791 - 3.815: 67.8201% ( 2992) 00:13:48.099 3.815 - 3.840: 82.8074% ( 2902) 00:13:48.099 3.840 - 3.865: 92.9092% ( 1956) 00:13:48.099 3.865 - 3.889: 97.3558% ( 861) 00:13:48.099 3.889 - 3.914: 99.0187% ( 322) 00:13:48.099 3.914 - 3.938: 99.5249% ( 98) 00:13:48.099 3.938 - 3.963: 99.5920% ( 13) 00:13:48.099 3.963 - 3.988: 99.6127% ( 4) 00:13:48.099 4.234 - 4.258: 99.6178% ( 1) 00:13:48.099 4.480 - 4.505: 99.6230% ( 1) 00:13:48.099 4.603 - 4.628: 99.6282% ( 1) 00:13:48.099 5.563 - 5.588: 99.6333% ( 1) 00:13:48.099 5.612 - 5.637: 99.6385% ( 1) 00:13:48.099 5.637 - 5.662: 99.6437% ( 1) 00:13:48.099 5.686 - 5.711: 99.6488% ( 1) 00:13:48.099 5.883 - 5.908: 99.6540% ( 1) 00:13:48.099 5.982 - 6.006: 99.6591% ( 1) 00:13:48.099 6.006 - 6.031: 99.6643% ( 1) 00:13:48.099 6.080 - 6.105: 99.6746% ( 2) 00:13:48.099 6.129 - 6.154: 99.6798% ( 1) 00:13:48.099 6.154 - 6.178: 99.6901% ( 2) 00:13:48.099 6.178 - 6.203: 99.7005% ( 2) 00:13:48.099 6.203 - 6.228: 99.7056% ( 1) 00:13:48.099 6.228 - 6.252: 99.7108% ( 1) 00:13:48.099 6.277 - 6.302: 99.7160% ( 1) 00:13:48.099 6.302 - 6.351: 99.7211% ( 1) 00:13:48.099 6.351 - 6.400: 99.7314% ( 2) 00:13:48.099 6.400 - 6.449: 99.7366% ( 1) 00:13:48.099 6.449 - 6.498: 99.7418% ( 1) 00:13:48.099 6.548 - 6.597: 99.7573% ( 3) 00:13:48.099 6.597 - 6.646: 99.7624% ( 1) 00:13:48.099 6.695 - 6.745: 99.7728% ( 2) 00:13:48.099 6.745 - 6.794: 99.7883% ( 3) 00:13:48.099 6.794 - 6.843: 99.7934% ( 1) 00:13:48.099 6.843 - 6.892: 99.7986% ( 1) 00:13:48.099 6.892 - 6.942: 99.8037% ( 1) 00:13:48.099 6.991 - 7.040: 99.8141% ( 2) 00:13:48.099 7.040 - 7.089: 99.8296% ( 3) 00:13:48.099 7.089 - 7.138: 99.8399% ( 2) 00:13:48.099 7.138 - 7.188: 99.8502% ( 2) 00:13:48.099 7.188 - 7.237: 99.8554% ( 1) 00:13:48.099 7.237 - 7.286: 99.8657% ( 2) 00:13:48.099 7.286 - 7.335: 99.8709% ( 1) 00:13:48.099 7.729 - 7.778: 99.8761% ( 1) 00:13:48.099 8.418 - 8.468: 99.8812% ( 1) 00:13:48.099 10.043 - 10.092: 99.8864% ( 1) 00:13:48.099 11.520 - 11.569: 99.8915% ( 1) 00:13:48.099 3982.572 - 4007.778: 100.0000% ( 21) 00:13:48.099 00:13:48.099 Complete histogram 00:13:48.099 ================== 00:13:48.099 Range in us Cumulative Count 00:13:48.099 2.191 - 2.203: 0.0052% ( 1) 00:13:48.099 2.203 - 2.215: 0.0155% ( 2) 00:13:48.099 2.215 - 2.228: 0.8160% ( 155) 00:13:48.099 2.228 - 2.240: 0.9864% ( 33) 00:13:48.099 2.240 - 2.252: 1.1723% ( 36) 00:13:48.099 2.252 - 2.265: 1.8231% ( 126) 00:13:48.099 2.265 - 2.277: 32.3813% ( 5917) 00:13:48.099 2.277 - 2.289: 40.0506% ( 1485) 00:13:48.099 2.289 - 2.302: 56.3291% ( 3152) 00:13:48.099 2.302 - 2.314: 76.1142% ( 3831) 00:13:48.099 2.314 - 2.326: 80.3749% ( 825) 00:13:48.099 2.326 - 2.338: 83.0088% ( 510) 00:13:48.099 2.338 - 2.351: 87.8841% ( 944) 00:13:48.099 2.351 - 2.363: 91.9641% ( 790) 00:13:48.099 2.363 - 2.375: 95.6308% ( 710) 00:13:48.099 2.375 - 2.388: 98.1201% ( 482) 00:13:48.099 2.388 - 2.400: 99.0446% ( 179) 00:13:48.099 2.400 - 2.412: 99.3235% ( 54) 00:13:48.099 2.412 - 2.425: 99.4112% ( 17) 
00:13:48.099 2.425 - 2.437: 99.4267% ( 3) 00:13:48.099 2.437 - 2.449: 99.4371% ( 2) 00:13:48.099 4.357 - 4.382: 99.4422% ( 1) 00:13:48.099 4.382 - [2024-06-10 11:20:45.233370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.099 4.406: 99.4526% ( 2) 00:13:48.099 4.505 - 4.529: 99.4629% ( 2) 00:13:48.099 4.554 - 4.578: 99.4681% ( 1) 00:13:48.099 4.603 - 4.628: 99.4784% ( 2) 00:13:48.099 4.652 - 4.677: 99.4836% ( 1) 00:13:48.099 4.677 - 4.702: 99.4939% ( 2) 00:13:48.099 4.702 - 4.726: 99.5042% ( 2) 00:13:48.099 4.726 - 4.751: 99.5094% ( 1) 00:13:48.099 4.751 - 4.775: 99.5145% ( 1) 00:13:48.099 4.775 - 4.800: 99.5197% ( 1) 00:13:48.099 4.849 - 4.874: 99.5249% ( 1) 00:13:48.099 4.948 - 4.972: 99.5300% ( 1) 00:13:48.099 5.071 - 5.095: 99.5455% ( 3) 00:13:48.099 5.218 - 5.243: 99.5507% ( 1) 00:13:48.099 5.243 - 5.268: 99.5610% ( 2) 00:13:48.099 5.292 - 5.317: 99.5662% ( 1) 00:13:48.099 5.391 - 5.415: 99.5713% ( 1) 00:13:48.099 5.415 - 5.440: 99.5765% ( 1) 00:13:48.099 5.440 - 5.465: 99.5817% ( 1) 00:13:48.099 5.465 - 5.489: 99.5868% ( 1) 00:13:48.099 5.612 - 5.637: 99.5920% ( 1) 00:13:48.099 5.662 - 5.686: 99.5972% ( 1) 00:13:48.099 5.711 - 5.735: 99.6075% ( 2) 00:13:48.099 5.760 - 5.785: 99.6127% ( 1) 00:13:48.099 5.834 - 5.858: 99.6178% ( 1) 00:13:48.099 6.302 - 6.351: 99.6230% ( 1) 00:13:48.099 9.895 - 9.945: 99.6282% ( 1) 00:13:48.099 10.880 - 10.929: 99.6333% ( 1) 00:13:48.099 3982.572 - 4007.778: 99.9948% ( 70) 00:13:48.099 4007.778 - 4032.985: 100.0000% ( 1) 00:13:48.099 00:13:48.099 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:48.099 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:48.099 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:48.099 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:48.099 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.361 [ 00:13:48.361 { 00:13:48.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.361 "subtype": "Discovery", 00:13:48.361 "listen_addresses": [], 00:13:48.361 "allow_any_host": true, 00:13:48.361 "hosts": [] 00:13:48.361 }, 00:13:48.361 { 00:13:48.361 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.361 "subtype": "NVMe", 00:13:48.361 "listen_addresses": [ 00:13:48.361 { 00:13:48.361 "trtype": "VFIOUSER", 00:13:48.361 "adrfam": "IPv4", 00:13:48.361 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.361 "trsvcid": "0" 00:13:48.361 } 00:13:48.361 ], 00:13:48.361 "allow_any_host": true, 00:13:48.361 "hosts": [], 00:13:48.361 "serial_number": "SPDK1", 00:13:48.361 "model_number": "SPDK bdev Controller", 00:13:48.361 "max_namespaces": 32, 00:13:48.361 "min_cntlid": 1, 00:13:48.361 "max_cntlid": 65519, 00:13:48.361 "namespaces": [ 00:13:48.361 { 00:13:48.361 "nsid": 1, 00:13:48.361 "bdev_name": "Malloc1", 00:13:48.361 "name": "Malloc1", 00:13:48.361 "nguid": "D023CE3E0452434BBD51DB2BF41D331B", 00:13:48.361 "uuid": "d023ce3e-0452-434b-bd51-db2bf41d331b" 00:13:48.361 } 00:13:48.361 ] 00:13:48.361 }, 00:13:48.361 { 00:13:48.361 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.361 "subtype": "NVMe", 00:13:48.361 "listen_addresses": [ 00:13:48.361 { 00:13:48.361 
"trtype": "VFIOUSER", 00:13:48.361 "adrfam": "IPv4", 00:13:48.361 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.361 "trsvcid": "0" 00:13:48.361 } 00:13:48.361 ], 00:13:48.361 "allow_any_host": true, 00:13:48.361 "hosts": [], 00:13:48.361 "serial_number": "SPDK2", 00:13:48.361 "model_number": "SPDK bdev Controller", 00:13:48.361 "max_namespaces": 32, 00:13:48.361 "min_cntlid": 1, 00:13:48.361 "max_cntlid": 65519, 00:13:48.361 "namespaces": [ 00:13:48.361 { 00:13:48.361 "nsid": 1, 00:13:48.361 "bdev_name": "Malloc2", 00:13:48.361 "name": "Malloc2", 00:13:48.361 "nguid": "F0B360D394364F9DBE53FCB7CEB6FC4C", 00:13:48.361 "uuid": "f0b360d3-9436-4f9d-be53-fcb7ceb6fc4c" 00:13:48.361 } 00:13:48.361 ] 00:13:48.361 } 00:13:48.361 ] 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1463225 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:48.361 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:48.361 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.622 [2024-06-10 11:20:45.651268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.622 Malloc3 00:13:48.622 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:48.883 [2024-06-10 11:20:45.884076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.883 11:20:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.883 Asynchronous Event Request test 00:13:48.883 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.883 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.883 Registering asynchronous event callbacks... 00:13:48.883 Starting namespace attribute notice tests for all controllers... 00:13:48.883 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:48.883 aer_cb - Changed Namespace 00:13:48.883 Cleaning up... 
00:13:48.883 [ 00:13:48.883 { 00:13:48.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.883 "subtype": "Discovery", 00:13:48.883 "listen_addresses": [], 00:13:48.883 "allow_any_host": true, 00:13:48.883 "hosts": [] 00:13:48.883 }, 00:13:48.883 { 00:13:48.883 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.883 "subtype": "NVMe", 00:13:48.883 "listen_addresses": [ 00:13:48.883 { 00:13:48.883 "trtype": "VFIOUSER", 00:13:48.883 "adrfam": "IPv4", 00:13:48.883 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.883 "trsvcid": "0" 00:13:48.883 } 00:13:48.883 ], 00:13:48.883 "allow_any_host": true, 00:13:48.883 "hosts": [], 00:13:48.883 "serial_number": "SPDK1", 00:13:48.883 "model_number": "SPDK bdev Controller", 00:13:48.883 "max_namespaces": 32, 00:13:48.883 "min_cntlid": 1, 00:13:48.883 "max_cntlid": 65519, 00:13:48.883 "namespaces": [ 00:13:48.883 { 00:13:48.883 "nsid": 1, 00:13:48.883 "bdev_name": "Malloc1", 00:13:48.883 "name": "Malloc1", 00:13:48.883 "nguid": "D023CE3E0452434BBD51DB2BF41D331B", 00:13:48.883 "uuid": "d023ce3e-0452-434b-bd51-db2bf41d331b" 00:13:48.883 }, 00:13:48.883 { 00:13:48.883 "nsid": 2, 00:13:48.883 "bdev_name": "Malloc3", 00:13:48.883 "name": "Malloc3", 00:13:48.883 "nguid": "B46BF12D29C14F5EB8FF178F9DBD713F", 00:13:48.883 "uuid": "b46bf12d-29c1-4f5e-b8ff-178f9dbd713f" 00:13:48.883 } 00:13:48.883 ] 00:13:48.883 }, 00:13:48.883 { 00:13:48.883 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.883 "subtype": "NVMe", 00:13:48.883 "listen_addresses": [ 00:13:48.883 { 00:13:48.883 "trtype": "VFIOUSER", 00:13:48.883 "adrfam": "IPv4", 00:13:48.883 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.883 "trsvcid": "0" 00:13:48.883 } 00:13:48.883 ], 00:13:48.883 "allow_any_host": true, 00:13:48.883 "hosts": [], 00:13:48.883 "serial_number": "SPDK2", 00:13:48.883 "model_number": "SPDK bdev Controller", 00:13:48.883 "max_namespaces": 32, 00:13:48.883 "min_cntlid": 1, 00:13:48.883 "max_cntlid": 65519, 00:13:48.883 "namespaces": [ 00:13:48.883 { 00:13:48.883 "nsid": 1, 00:13:48.883 "bdev_name": "Malloc2", 00:13:48.883 "name": "Malloc2", 00:13:48.884 "nguid": "F0B360D394364F9DBE53FCB7CEB6FC4C", 00:13:48.884 "uuid": "f0b360d3-9436-4f9d-be53-fcb7ceb6fc4c" 00:13:48.884 } 00:13:48.884 ] 00:13:48.884 } 00:13:48.884 ] 00:13:49.147 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1463225 00:13:49.147 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.147 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:49.147 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:49.147 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:49.147 [2024-06-10 11:20:46.143104] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:13:49.147 [2024-06-10 11:20:46.143143] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463248 ] 00:13:49.147 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.147 [2024-06-10 11:20:46.173548] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:49.147 [2024-06-10 11:20:46.182053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.147 [2024-06-10 11:20:46.182074] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd123cff000 00:13:49.147 [2024-06-10 11:20:46.183051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.184054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.185060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.186070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.187080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.188081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.189089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.190094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.147 [2024-06-10 11:20:46.191110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.147 [2024-06-10 11:20:46.191121] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd123cf4000 00:13:49.147 [2024-06-10 11:20:46.192345] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.147 [2024-06-10 11:20:46.212276] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:49.147 [2024-06-10 11:20:46.212298] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:49.147 [2024-06-10 11:20:46.214356] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.147 [2024-06-10 11:20:46.214396] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:49.147 [2024-06-10 11:20:46.214469] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:13:49.147 [2024-06-10 11:20:46.214485] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:49.147 [2024-06-10 11:20:46.214490] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:49.147 [2024-06-10 11:20:46.215368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:49.147 [2024-06-10 11:20:46.215377] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:49.147 [2024-06-10 11:20:46.215383] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:49.147 [2024-06-10 11:20:46.216375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.147 [2024-06-10 11:20:46.216385] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:49.147 [2024-06-10 11:20:46.216391] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.217376] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:49.148 [2024-06-10 11:20:46.217385] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.218382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:49.148 [2024-06-10 11:20:46.218390] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:49.148 [2024-06-10 11:20:46.218394] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.218401] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.218505] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:49.148 [2024-06-10 11:20:46.218510] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.218514] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:49.148 [2024-06-10 11:20:46.219385] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:49.148 [2024-06-10 11:20:46.220397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:49.148 [2024-06-10 11:20:46.221403] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.148 [2024-06-10 11:20:46.222404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.148 [2024-06-10 11:20:46.222444] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:49.148 [2024-06-10 11:20:46.223417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:49.148 [2024-06-10 11:20:46.223424] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:49.148 [2024-06-10 11:20:46.223429] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.223450] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:49.148 [2024-06-10 11:20:46.223460] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.223473] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.148 [2024-06-10 11:20:46.223478] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.148 [2024-06-10 11:20:46.223489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.229830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.229841] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:49.148 [2024-06-10 11:20:46.229846] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:49.148 [2024-06-10 11:20:46.229850] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:49.148 [2024-06-10 11:20:46.229854] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:49.148 [2024-06-10 11:20:46.229858] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:49.148 [2024-06-10 11:20:46.229863] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:49.148 [2024-06-10 11:20:46.229867] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.229874] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.229885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.237828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.237842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.148 [2024-06-10 11:20:46.237851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.148 [2024-06-10 11:20:46.237859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.148 [2024-06-10 11:20:46.237866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.148 [2024-06-10 11:20:46.237871] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.237878] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.237886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.245827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.245837] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:49.148 [2024-06-10 11:20:46.245845] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.245851] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.245857] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.245865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.253827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.253876] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.253884] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.253891] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:49.148 [2024-06-10 11:20:46.253895] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:49.148 [2024-06-10 11:20:46.253901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:49.148 
[2024-06-10 11:20:46.261827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.261837] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:49.148 [2024-06-10 11:20:46.261845] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.261852] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.261858] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.148 [2024-06-10 11:20:46.261862] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.148 [2024-06-10 11:20:46.261868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.269828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:49.148 [2024-06-10 11:20:46.269841] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.269848] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:49.148 [2024-06-10 11:20:46.269855] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.148 [2024-06-10 11:20:46.269858] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.148 [2024-06-10 11:20:46.269864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.148 [2024-06-10 11:20:46.277828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.277837] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277843] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277852] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277857] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277862] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277866] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:49.149 [2024-06-10 11:20:46.277870] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:49.149 [2024-06-10 11:20:46.277874] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:49.149 [2024-06-10 11:20:46.277893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.285827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.285840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.293828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.293840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.301827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.301839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.309827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.309839] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:49.149 [2024-06-10 11:20:46.309844] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:49.149 [2024-06-10 11:20:46.309847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:49.149 [2024-06-10 11:20:46.309850] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:49.149 [2024-06-10 11:20:46.309856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:49.149 [2024-06-10 11:20:46.309863] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:49.149 [2024-06-10 11:20:46.309866] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:49.149 [2024-06-10 11:20:46.309872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.309878] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:49.149 [2024-06-10 11:20:46.309882] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.149 [2024-06-10 11:20:46.309887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.309894] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:49.149 [2024-06-10 11:20:46.309900] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:49.149 [2024-06-10 11:20:46.309906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:49.149 [2024-06-10 11:20:46.317828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.317843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.317851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:49.149 [2024-06-10 11:20:46.317859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:49.149 ===================================================== 00:13:49.149 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:49.149 ===================================================== 00:13:49.149 Controller Capabilities/Features 00:13:49.149 ================================ 00:13:49.149 Vendor ID: 4e58 00:13:49.149 Subsystem Vendor ID: 4e58 00:13:49.149 Serial Number: SPDK2 00:13:49.149 Model Number: SPDK bdev Controller 00:13:49.149 Firmware Version: 24.09 00:13:49.149 Recommended Arb Burst: 6 00:13:49.149 IEEE OUI Identifier: 8d 6b 50 00:13:49.149 Multi-path I/O 00:13:49.149 May have multiple subsystem ports: Yes 00:13:49.149 May have multiple controllers: Yes 00:13:49.149 Associated with SR-IOV VF: No 00:13:49.149 Max Data Transfer Size: 131072 00:13:49.149 Max Number of Namespaces: 32 00:13:49.149 Max Number of I/O Queues: 127 00:13:49.149 NVMe Specification Version (VS): 1.3 00:13:49.149 NVMe Specification Version (Identify): 1.3 00:13:49.149 Maximum Queue Entries: 256 00:13:49.149 Contiguous Queues Required: Yes 00:13:49.149 Arbitration Mechanisms Supported 00:13:49.149 Weighted Round Robin: Not Supported 00:13:49.149 Vendor Specific: Not Supported 00:13:49.149 Reset Timeout: 15000 ms 00:13:49.149 Doorbell Stride: 4 bytes 00:13:49.149 NVM Subsystem Reset: Not Supported 00:13:49.149 Command Sets Supported 00:13:49.149 NVM Command Set: Supported 00:13:49.149 Boot Partition: Not Supported 00:13:49.149 Memory Page Size Minimum: 4096 bytes 00:13:49.149 Memory Page Size Maximum: 4096 bytes 00:13:49.149 Persistent Memory Region: Not Supported 00:13:49.149 Optional Asynchronous Events Supported 00:13:49.149 Namespace Attribute Notices: Supported 00:13:49.149 Firmware Activation Notices: Not Supported 00:13:49.149 ANA Change Notices: Not Supported 00:13:49.149 PLE Aggregate Log Change Notices: Not Supported 00:13:49.149 LBA Status Info Alert Notices: Not Supported 00:13:49.149 EGE Aggregate Log Change Notices: Not Supported 00:13:49.149 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.149 Zone Descriptor Change Notices: Not Supported 00:13:49.149 Discovery Log Change Notices: Not Supported 00:13:49.149 Controller Attributes 00:13:49.149 128-bit Host Identifier: Supported 00:13:49.149 Non-Operational Permissive Mode: Not Supported 00:13:49.149 NVM Sets: Not Supported 00:13:49.149 Read Recovery Levels: Not Supported 00:13:49.149 Endurance Groups: Not Supported 00:13:49.149 Predictable Latency Mode: Not Supported 00:13:49.149 Traffic Based Keep ALive: Not Supported 00:13:49.149 Namespace Granularity: Not Supported 
00:13:49.149 SQ Associations: Not Supported 00:13:49.149 UUID List: Not Supported 00:13:49.149 Multi-Domain Subsystem: Not Supported 00:13:49.149 Fixed Capacity Management: Not Supported 00:13:49.149 Variable Capacity Management: Not Supported 00:13:49.149 Delete Endurance Group: Not Supported 00:13:49.149 Delete NVM Set: Not Supported 00:13:49.149 Extended LBA Formats Supported: Not Supported 00:13:49.149 Flexible Data Placement Supported: Not Supported 00:13:49.149 00:13:49.149 Controller Memory Buffer Support 00:13:49.149 ================================ 00:13:49.149 Supported: No 00:13:49.149 00:13:49.149 Persistent Memory Region Support 00:13:49.149 ================================ 00:13:49.149 Supported: No 00:13:49.149 00:13:49.149 Admin Command Set Attributes 00:13:49.149 ============================ 00:13:49.149 Security Send/Receive: Not Supported 00:13:49.149 Format NVM: Not Supported 00:13:49.149 Firmware Activate/Download: Not Supported 00:13:49.149 Namespace Management: Not Supported 00:13:49.149 Device Self-Test: Not Supported 00:13:49.149 Directives: Not Supported 00:13:49.149 NVMe-MI: Not Supported 00:13:49.149 Virtualization Management: Not Supported 00:13:49.149 Doorbell Buffer Config: Not Supported 00:13:49.149 Get LBA Status Capability: Not Supported 00:13:49.149 Command & Feature Lockdown Capability: Not Supported 00:13:49.149 Abort Command Limit: 4 00:13:49.149 Async Event Request Limit: 4 00:13:49.149 Number of Firmware Slots: N/A 00:13:49.149 Firmware Slot 1 Read-Only: N/A 00:13:49.149 Firmware Activation Without Reset: N/A 00:13:49.150 Multiple Update Detection Support: N/A 00:13:49.150 Firmware Update Granularity: No Information Provided 00:13:49.150 Per-Namespace SMART Log: No 00:13:49.150 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.150 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:49.150 Command Effects Log Page: Supported 00:13:49.150 Get Log Page Extended Data: Supported 00:13:49.150 Telemetry Log Pages: Not Supported 00:13:49.150 Persistent Event Log Pages: Not Supported 00:13:49.150 Supported Log Pages Log Page: May Support 00:13:49.150 Commands Supported & Effects Log Page: Not Supported 00:13:49.150 Feature Identifiers & Effects Log Page:May Support 00:13:49.150 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.150 Data Area 4 for Telemetry Log: Not Supported 00:13:49.150 Error Log Page Entries Supported: 128 00:13:49.150 Keep Alive: Supported 00:13:49.150 Keep Alive Granularity: 10000 ms 00:13:49.150 00:13:49.150 NVM Command Set Attributes 00:13:49.150 ========================== 00:13:49.150 Submission Queue Entry Size 00:13:49.150 Max: 64 00:13:49.150 Min: 64 00:13:49.150 Completion Queue Entry Size 00:13:49.150 Max: 16 00:13:49.150 Min: 16 00:13:49.150 Number of Namespaces: 32 00:13:49.150 Compare Command: Supported 00:13:49.150 Write Uncorrectable Command: Not Supported 00:13:49.150 Dataset Management Command: Supported 00:13:49.150 Write Zeroes Command: Supported 00:13:49.150 Set Features Save Field: Not Supported 00:13:49.150 Reservations: Not Supported 00:13:49.150 Timestamp: Not Supported 00:13:49.150 Copy: Supported 00:13:49.150 Volatile Write Cache: Present 00:13:49.150 Atomic Write Unit (Normal): 1 00:13:49.150 Atomic Write Unit (PFail): 1 00:13:49.150 Atomic Compare & Write Unit: 1 00:13:49.150 Fused Compare & Write: Supported 00:13:49.150 Scatter-Gather List 00:13:49.150 SGL Command Set: Supported (Dword aligned) 00:13:49.150 SGL Keyed: Not Supported 00:13:49.150 SGL Bit Bucket Descriptor: Not Supported 00:13:49.150 
SGL Metadata Pointer: Not Supported 00:13:49.150 Oversized SGL: Not Supported 00:13:49.150 SGL Metadata Address: Not Supported 00:13:49.150 SGL Offset: Not Supported 00:13:49.150 Transport SGL Data Block: Not Supported 00:13:49.150 Replay Protected Memory Block: Not Supported 00:13:49.150 00:13:49.150 Firmware Slot Information 00:13:49.150 ========================= 00:13:49.150 Active slot: 1 00:13:49.150 Slot 1 Firmware Revision: 24.09 00:13:49.150 00:13:49.150 00:13:49.150 Commands Supported and Effects 00:13:49.150 ============================== 00:13:49.150 Admin Commands 00:13:49.150 -------------- 00:13:49.150 Get Log Page (02h): Supported 00:13:49.150 Identify (06h): Supported 00:13:49.150 Abort (08h): Supported 00:13:49.150 Set Features (09h): Supported 00:13:49.150 Get Features (0Ah): Supported 00:13:49.150 Asynchronous Event Request (0Ch): Supported 00:13:49.150 Keep Alive (18h): Supported 00:13:49.150 I/O Commands 00:13:49.150 ------------ 00:13:49.150 Flush (00h): Supported LBA-Change 00:13:49.150 Write (01h): Supported LBA-Change 00:13:49.150 Read (02h): Supported 00:13:49.150 Compare (05h): Supported 00:13:49.150 Write Zeroes (08h): Supported LBA-Change 00:13:49.150 Dataset Management (09h): Supported LBA-Change 00:13:49.150 Copy (19h): Supported LBA-Change 00:13:49.150 Unknown (79h): Supported LBA-Change 00:13:49.150 Unknown (7Ah): Supported 00:13:49.150 00:13:49.150 Error Log 00:13:49.150 ========= 00:13:49.150 00:13:49.150 Arbitration 00:13:49.150 =========== 00:13:49.150 Arbitration Burst: 1 00:13:49.150 00:13:49.150 Power Management 00:13:49.150 ================ 00:13:49.150 Number of Power States: 1 00:13:49.150 Current Power State: Power State #0 00:13:49.150 Power State #0: 00:13:49.150 Max Power: 0.00 W 00:13:49.150 Non-Operational State: Operational 00:13:49.150 Entry Latency: Not Reported 00:13:49.150 Exit Latency: Not Reported 00:13:49.150 Relative Read Throughput: 0 00:13:49.150 Relative Read Latency: 0 00:13:49.150 Relative Write Throughput: 0 00:13:49.150 Relative Write Latency: 0 00:13:49.150 Idle Power: Not Reported 00:13:49.150 Active Power: Not Reported 00:13:49.150 Non-Operational Permissive Mode: Not Supported 00:13:49.150 00:13:49.150 Health Information 00:13:49.150 ================== 00:13:49.150 Critical Warnings: 00:13:49.150 Available Spare Space: OK 00:13:49.150 Temperature: OK 00:13:49.150 Device Reliability: OK 00:13:49.150 Read Only: No 00:13:49.150 Volatile Memory Backup: OK 00:13:49.150 Current Temperature: 0 Kelvin (-2[2024-06-10 11:20:46.317949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:49.150 [2024-06-10 11:20:46.325826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:49.150 [2024-06-10 11:20:46.325851] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:49.150 [2024-06-10 11:20:46.325859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.150 [2024-06-10 11:20:46.325865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.150 [2024-06-10 11:20:46.325871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.150 [2024-06-10 11:20:46.325877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.150 [2024-06-10 11:20:46.325911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.150 [2024-06-10 11:20:46.325921] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:49.150 [2024-06-10 11:20:46.326921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.150 [2024-06-10 11:20:46.326966] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:49.150 [2024-06-10 11:20:46.326972] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:49.150 [2024-06-10 11:20:46.327931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:49.150 [2024-06-10 11:20:46.327942] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:49.150 [2024-06-10 11:20:46.327989] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:49.150 [2024-06-10 11:20:46.329264] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.150 73 Celsius) 00:13:49.150 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:49.150 Available Spare: 0% 00:13:49.150 Available Spare Threshold: 0% 00:13:49.150 Life Percentage Used: 0% 00:13:49.150 Data Units Read: 0 00:13:49.150 Data Units Written: 0 00:13:49.150 Host Read Commands: 0 00:13:49.150 Host Write Commands: 0 00:13:49.150 Controller Busy Time: 0 minutes 00:13:49.150 Power Cycles: 0 00:13:49.150 Power On Hours: 0 hours 00:13:49.150 Unsafe Shutdowns: 0 00:13:49.150 Unrecoverable Media Errors: 0 00:13:49.150 Lifetime Error Log Entries: 0 00:13:49.150 Warning Temperature Time: 0 minutes 00:13:49.150 Critical Temperature Time: 0 minutes 00:13:49.150 00:13:49.150 Number of Queues 00:13:49.150 ================ 00:13:49.150 Number of I/O Submission Queues: 127 00:13:49.150 Number of I/O Completion Queues: 127 00:13:49.150 00:13:49.150 Active Namespaces 00:13:49.150 ================= 00:13:49.150 Namespace ID:1 00:13:49.150 Error Recovery Timeout: Unlimited 00:13:49.150 Command Set Identifier: NVM (00h) 00:13:49.150 Deallocate: Supported 00:13:49.150 Deallocated/Unwritten Error: Not Supported 00:13:49.150 Deallocated Read Value: Unknown 00:13:49.150 Deallocate in Write Zeroes: Not Supported 00:13:49.150 Deallocated Guard Field: 0xFFFF 00:13:49.150 Flush: Supported 00:13:49.150 Reservation: Supported 00:13:49.150 Namespace Sharing Capabilities: Multiple Controllers 00:13:49.150 Size (in LBAs): 131072 (0GiB) 00:13:49.151 Capacity (in LBAs): 131072 (0GiB) 00:13:49.151 Utilization (in LBAs): 131072 (0GiB) 00:13:49.151 NGUID: F0B360D394364F9DBE53FCB7CEB6FC4C 00:13:49.151 UUID: f0b360d3-9436-4f9d-be53-fcb7ceb6fc4c 00:13:49.151 Thin Provisioning: Not Supported 00:13:49.151 Per-NS Atomic Units: Yes 00:13:49.151 Atomic Boundary Size (Normal): 0 00:13:49.151 Atomic Boundary Size (PFail): 0 00:13:49.151 Atomic Boundary Offset: 0 00:13:49.151 Maximum Single Source Range Length: 65535 
00:13:49.151 Maximum Copy Length: 65535 00:13:49.151 Maximum Source Range Count: 1 00:13:49.151 NGUID/EUI64 Never Reused: No 00:13:49.151 Namespace Write Protected: No 00:13:49.151 Number of LBA Formats: 1 00:13:49.151 Current LBA Format: LBA Format #00 00:13:49.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.151 00:13:49.412 11:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:49.412 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.412 [2024-06-10 11:20:46.519378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.778 Initializing NVMe Controllers 00:13:54.778 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:54.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:54.778 Initialization complete. Launching workers. 00:13:54.778 ======================================================== 00:13:54.778 Latency(us) 00:13:54.778 Device Information : IOPS MiB/s Average min max 00:13:54.778 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 43952.34 171.69 2911.66 927.82 6388.86 00:13:54.778 ======================================================== 00:13:54.779 Total : 43952.34 171.69 2911.66 927.82 6388.86 00:13:54.779 00:13:54.779 [2024-06-10 11:20:51.624020] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.779 11:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:54.779 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.779 [2024-06-10 11:20:51.818649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.062 Initializing NVMe Controllers 00:14:00.062 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:00.062 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:00.062 Initialization complete. Launching workers. 
00:14:00.062 ======================================================== 00:14:00.062 Latency(us) 00:14:00.062 Device Information : IOPS MiB/s Average min max 00:14:00.062 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39618.48 154.76 3230.17 1120.62 6599.36 00:14:00.062 ======================================================== 00:14:00.062 Total : 39618.48 154.76 3230.17 1120.62 6599.36 00:14:00.062 00:14:00.062 [2024-06-10 11:20:56.840216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.062 11:20:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:00.062 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.062 [2024-06-10 11:20:57.060956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.338 [2024-06-10 11:21:02.197912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.338 Initializing NVMe Controllers 00:14:05.338 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.338 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:05.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:05.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:05.338 Initialization complete. Launching workers. 00:14:05.338 Starting thread on core 2 00:14:05.338 Starting thread on core 3 00:14:05.338 Starting thread on core 1 00:14:05.338 11:21:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:05.338 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.338 [2024-06-10 11:21:02.476347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.628 [2024-06-10 11:21:05.527901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.628 Initializing NVMe Controllers 00:14:08.629 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.629 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.629 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:08.629 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:08.629 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:08.629 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:08.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:08.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:08.629 Initialization complete. Launching workers. 
00:14:08.629 Starting thread on core 1 with urgent priority queue 00:14:08.629 Starting thread on core 2 with urgent priority queue 00:14:08.629 Starting thread on core 3 with urgent priority queue 00:14:08.629 Starting thread on core 0 with urgent priority queue 00:14:08.629 SPDK bdev Controller (SPDK2 ) core 0: 13764.00 IO/s 7.27 secs/100000 ios 00:14:08.629 SPDK bdev Controller (SPDK2 ) core 1: 12407.00 IO/s 8.06 secs/100000 ios 00:14:08.629 SPDK bdev Controller (SPDK2 ) core 2: 9910.67 IO/s 10.09 secs/100000 ios 00:14:08.629 SPDK bdev Controller (SPDK2 ) core 3: 13817.67 IO/s 7.24 secs/100000 ios 00:14:08.629 ======================================================== 00:14:08.629 00:14:08.629 11:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.629 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.629 [2024-06-10 11:21:05.792250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.629 Initializing NVMe Controllers 00:14:08.629 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.629 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.629 Namespace ID: 1 size: 0GB 00:14:08.629 Initialization complete. 00:14:08.629 INFO: using host memory buffer for IO 00:14:08.629 Hello world! 00:14:08.629 [2024-06-10 11:21:05.804327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.629 11:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.887 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.887 [2024-06-10 11:21:06.061396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.280 Initializing NVMe Controllers 00:14:10.280 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.280 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.280 Initialization complete. Launching workers. 
00:14:10.280 submit (in ns) avg, min, max = 7187.5, 3629.2, 3999446.9 00:14:10.280 complete (in ns) avg, min, max = 20973.8, 2194.6, 4996166.2 00:14:10.280 00:14:10.280 Submit histogram 00:14:10.280 ================ 00:14:10.280 Range in us Cumulative Count 00:14:10.280 3.618 - 3.643: 0.9740% ( 160) 00:14:10.280 3.643 - 3.668: 4.3282% ( 551) 00:14:10.280 3.668 - 3.692: 12.0533% ( 1269) 00:14:10.280 3.692 - 3.717: 22.1404% ( 1657) 00:14:10.280 3.717 - 3.742: 31.4969% ( 1537) 00:14:10.280 3.742 - 3.766: 42.0892% ( 1740) 00:14:10.280 3.766 - 3.791: 54.3678% ( 2017) 00:14:10.280 3.791 - 3.815: 70.6155% ( 2669) 00:14:10.280 3.815 - 3.840: 84.6777% ( 2310) 00:14:10.280 3.840 - 3.865: 93.3768% ( 1429) 00:14:10.280 3.865 - 3.889: 97.7233% ( 714) 00:14:10.280 3.889 - 3.914: 99.0869% ( 224) 00:14:10.280 3.914 - 3.938: 99.4947% ( 67) 00:14:10.280 3.938 - 3.963: 99.5556% ( 10) 00:14:10.280 3.963 - 3.988: 99.5860% ( 5) 00:14:10.280 3.988 - 4.012: 99.5982% ( 2) 00:14:10.280 4.012 - 4.037: 99.6165% ( 3) 00:14:10.280 4.086 - 4.111: 99.6287% ( 2) 00:14:10.280 4.111 - 4.135: 99.6347% ( 1) 00:14:10.280 4.283 - 4.308: 99.6408% ( 1) 00:14:10.280 4.972 - 4.997: 99.6469% ( 1) 00:14:10.280 5.760 - 5.785: 99.6530% ( 1) 00:14:10.280 5.785 - 5.809: 99.6591% ( 1) 00:14:10.280 5.858 - 5.883: 99.6652% ( 1) 00:14:10.280 5.932 - 5.957: 99.6713% ( 1) 00:14:10.280 6.031 - 6.055: 99.6774% ( 1) 00:14:10.280 6.080 - 6.105: 99.6895% ( 2) 00:14:10.280 6.178 - 6.203: 99.7017% ( 2) 00:14:10.280 6.302 - 6.351: 99.7078% ( 1) 00:14:10.280 6.351 - 6.400: 99.7139% ( 1) 00:14:10.280 6.400 - 6.449: 99.7261% ( 2) 00:14:10.280 6.449 - 6.498: 99.7443% ( 3) 00:14:10.280 6.498 - 6.548: 99.7687% ( 4) 00:14:10.280 6.548 - 6.597: 99.7748% ( 1) 00:14:10.280 6.597 - 6.646: 99.7930% ( 3) 00:14:10.280 6.745 - 6.794: 99.8113% ( 3) 00:14:10.280 6.794 - 6.843: 99.8235% ( 2) 00:14:10.280 6.843 - 6.892: 99.8356% ( 2) 00:14:10.280 6.892 - 6.942: 99.8478% ( 2) 00:14:10.280 7.089 - 7.138: 99.8661% ( 3) 00:14:10.280 7.335 - 7.385: 99.8722% ( 1) 00:14:10.280 7.385 - 7.434: 99.8782% ( 1) 00:14:10.280 7.483 - 7.532: 99.8843% ( 1) 00:14:10.280 7.729 - 7.778: 99.8904% ( 1) 00:14:10.280 8.468 - 8.517: 99.8965% ( 1) 00:14:10.280 9.108 - 9.157: 99.9026% ( 1) 00:14:10.280 11.668 - 11.717: 99.9087% ( 1) 00:14:10.280 13.095 - 13.194: 99.9148% ( 1) 00:14:10.280 3982.572 - 4007.778: 100.0000% ( 14) 00:14:10.280 00:14:10.280 Complete histogram 00:14:10.280 ================== 00:14:10.280 Range in us Cumulative Count 00:14:10.280 2.191 - 2.203: 0.0061% ( 1) 00:14:10.280 2.203 - 2.215: 1.1323% ( 185) 00:14:10.280 2.215 - 2.228: 1.4245% ( 48) 00:14:10.280 2.228 - 2.240: 1.6254% ( 33) 00:14:10.280 2.240 - 2.252: 3.8534% ( 366) 00:14:10.280 2.252 - 2.265: 36.0626% ( 5291) 00:14:10.280 2.265 - 2.277: 42.4667% ( 1052) 00:14:10.280 2.277 - 2.289: 61.0032% ( 3045) 00:14:10.280 2.289 - 2.302: 77.9083% ( 2777) 00:14:10.280 2.302 - 2.314: 80.8608% ( 485) 00:14:10.280 2.314 - 2.326: 82.6749% ( 298) 00:14:10.280 2.326 - 2.338: 86.8205% ( 681) 00:14:10.280 2.338 - 2.351: 91.6722% ( 797) 00:14:10.280 2.351 - 2.363: 95.2152% ( 582) 00:14:10.280 2.363 - 2.375: 97.9363% ( 447) 00:14:10.280 2.375 - 2.388: 99.0990% ( 191) 00:14:10.280 2.388 - 2.400: 99.3182% ( 36) 00:14:10.280 2.400 - 2.412: 99.3547% ( 6) 00:14:10.280 2.412 - 2.425: 99.3669% ( 2) 00:14:10.280 2.474 - 2.486: 99.3730% ( 1) 00:14:10.280 4.332 - 4.357: 99.3791% ( 1) 00:14:10.280 4.529 - 4.554: 99.3852% ( 1) 00:14:10.280 4.726 - 4.751: 99.3912% ( 1) 00:14:10.280 4.751 - 4.775: 99.3973% ( 1) 00:14:10.280 4.825 - 4.849: 99.4034% ( 
1) 00:14:10.280 4.948 - 4.972: 99.4095% ( 1) 00:14:10.280 5.046 - 5.071: 99.4156% ( 1) 00:14:10.280 5.095 - 5.120: 99.4278% ( 2) 00:14:10.280 5.120 - [2024-06-10 11:21:07.157217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.280 5.145: 99.4399% ( 2) 00:14:10.280 5.194 - 5.218: 99.4460% ( 1) 00:14:10.280 5.243 - 5.268: 99.4521% ( 1) 00:14:10.280 5.292 - 5.317: 99.4582% ( 1) 00:14:10.280 5.317 - 5.342: 99.4643% ( 1) 00:14:10.280 5.342 - 5.366: 99.4704% ( 1) 00:14:10.280 5.489 - 5.514: 99.4765% ( 1) 00:14:10.280 5.563 - 5.588: 99.4886% ( 2) 00:14:10.280 5.858 - 5.883: 99.4947% ( 1) 00:14:10.280 6.055 - 6.080: 99.5008% ( 1) 00:14:10.280 8.468 - 8.517: 99.5069% ( 1) 00:14:10.280 9.600 - 9.649: 99.5130% ( 1) 00:14:10.280 9.698 - 9.748: 99.5191% ( 1) 00:14:10.280 10.338 - 10.388: 99.5252% ( 1) 00:14:10.280 146.511 - 147.298: 99.5313% ( 1) 00:14:10.280 3024.738 - 3037.342: 99.5434% ( 2) 00:14:10.280 3075.151 - 3087.754: 99.5495% ( 1) 00:14:10.280 3982.572 - 4007.778: 99.9878% ( 72) 00:14:10.280 4990.818 - 5016.025: 100.0000% ( 2) 00:14:10.280 00:14:10.280 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:10.280 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:10.280 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:10.280 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:10.280 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.280 [ 00:14:10.280 { 00:14:10.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.280 "subtype": "Discovery", 00:14:10.280 "listen_addresses": [], 00:14:10.280 "allow_any_host": true, 00:14:10.280 "hosts": [] 00:14:10.280 }, 00:14:10.280 { 00:14:10.280 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:10.280 "subtype": "NVMe", 00:14:10.280 "listen_addresses": [ 00:14:10.280 { 00:14:10.280 "trtype": "VFIOUSER", 00:14:10.280 "adrfam": "IPv4", 00:14:10.280 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:10.280 "trsvcid": "0" 00:14:10.280 } 00:14:10.280 ], 00:14:10.280 "allow_any_host": true, 00:14:10.280 "hosts": [], 00:14:10.280 "serial_number": "SPDK1", 00:14:10.280 "model_number": "SPDK bdev Controller", 00:14:10.280 "max_namespaces": 32, 00:14:10.280 "min_cntlid": 1, 00:14:10.280 "max_cntlid": 65519, 00:14:10.280 "namespaces": [ 00:14:10.280 { 00:14:10.280 "nsid": 1, 00:14:10.280 "bdev_name": "Malloc1", 00:14:10.280 "name": "Malloc1", 00:14:10.280 "nguid": "D023CE3E0452434BBD51DB2BF41D331B", 00:14:10.280 "uuid": "d023ce3e-0452-434b-bd51-db2bf41d331b" 00:14:10.280 }, 00:14:10.280 { 00:14:10.280 "nsid": 2, 00:14:10.280 "bdev_name": "Malloc3", 00:14:10.280 "name": "Malloc3", 00:14:10.280 "nguid": "B46BF12D29C14F5EB8FF178F9DBD713F", 00:14:10.280 "uuid": "b46bf12d-29c1-4f5e-b8ff-178f9dbd713f" 00:14:10.280 } 00:14:10.280 ] 00:14:10.280 }, 00:14:10.280 { 00:14:10.280 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:10.280 "subtype": "NVMe", 00:14:10.281 "listen_addresses": [ 00:14:10.281 { 00:14:10.281 "trtype": "VFIOUSER", 00:14:10.281 "adrfam": "IPv4", 00:14:10.281 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:10.281 "trsvcid": "0" 00:14:10.281 } 00:14:10.281 ], 00:14:10.281 "allow_any_host": 
true, 00:14:10.281 "hosts": [], 00:14:10.281 "serial_number": "SPDK2", 00:14:10.281 "model_number": "SPDK bdev Controller", 00:14:10.281 "max_namespaces": 32, 00:14:10.281 "min_cntlid": 1, 00:14:10.281 "max_cntlid": 65519, 00:14:10.281 "namespaces": [ 00:14:10.281 { 00:14:10.281 "nsid": 1, 00:14:10.281 "bdev_name": "Malloc2", 00:14:10.281 "name": "Malloc2", 00:14:10.281 "nguid": "F0B360D394364F9DBE53FCB7CEB6FC4C", 00:14:10.281 "uuid": "f0b360d3-9436-4f9d-be53-fcb7ceb6fc4c" 00:14:10.281 } 00:14:10.281 ] 00:14:10.281 } 00:14:10.281 ] 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1467007 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:10.281 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:10.281 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.540 [2024-06-10 11:21:07.568239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.540 Malloc4 00:14:10.540 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:10.798 [2024-06-10 11:21:07.842069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.798 11:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.798 Asynchronous Event Request test 00:14:10.798 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.798 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.798 Registering asynchronous event callbacks... 00:14:10.798 Starting namespace attribute notice tests for all controllers... 00:14:10.798 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:10.798 aer_cb - Changed Namespace 00:14:10.798 Cleaning up... 
00:14:11.058 [ 00:14:11.058 { 00:14:11.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:11.058 "subtype": "Discovery", 00:14:11.058 "listen_addresses": [], 00:14:11.058 "allow_any_host": true, 00:14:11.058 "hosts": [] 00:14:11.058 }, 00:14:11.058 { 00:14:11.058 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:11.058 "subtype": "NVMe", 00:14:11.058 "listen_addresses": [ 00:14:11.058 { 00:14:11.058 "trtype": "VFIOUSER", 00:14:11.058 "adrfam": "IPv4", 00:14:11.058 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:11.058 "trsvcid": "0" 00:14:11.058 } 00:14:11.058 ], 00:14:11.058 "allow_any_host": true, 00:14:11.058 "hosts": [], 00:14:11.058 "serial_number": "SPDK1", 00:14:11.058 "model_number": "SPDK bdev Controller", 00:14:11.058 "max_namespaces": 32, 00:14:11.058 "min_cntlid": 1, 00:14:11.058 "max_cntlid": 65519, 00:14:11.058 "namespaces": [ 00:14:11.058 { 00:14:11.058 "nsid": 1, 00:14:11.058 "bdev_name": "Malloc1", 00:14:11.058 "name": "Malloc1", 00:14:11.058 "nguid": "D023CE3E0452434BBD51DB2BF41D331B", 00:14:11.058 "uuid": "d023ce3e-0452-434b-bd51-db2bf41d331b" 00:14:11.058 }, 00:14:11.058 { 00:14:11.058 "nsid": 2, 00:14:11.058 "bdev_name": "Malloc3", 00:14:11.058 "name": "Malloc3", 00:14:11.058 "nguid": "B46BF12D29C14F5EB8FF178F9DBD713F", 00:14:11.058 "uuid": "b46bf12d-29c1-4f5e-b8ff-178f9dbd713f" 00:14:11.058 } 00:14:11.058 ] 00:14:11.058 }, 00:14:11.058 { 00:14:11.058 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:11.058 "subtype": "NVMe", 00:14:11.058 "listen_addresses": [ 00:14:11.058 { 00:14:11.058 "trtype": "VFIOUSER", 00:14:11.058 "adrfam": "IPv4", 00:14:11.058 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:11.058 "trsvcid": "0" 00:14:11.058 } 00:14:11.058 ], 00:14:11.058 "allow_any_host": true, 00:14:11.058 "hosts": [], 00:14:11.058 "serial_number": "SPDK2", 00:14:11.058 "model_number": "SPDK bdev Controller", 00:14:11.058 "max_namespaces": 32, 00:14:11.058 "min_cntlid": 1, 00:14:11.058 "max_cntlid": 65519, 00:14:11.058 "namespaces": [ 00:14:11.058 { 00:14:11.058 "nsid": 1, 00:14:11.058 "bdev_name": "Malloc2", 00:14:11.058 "name": "Malloc2", 00:14:11.058 "nguid": "F0B360D394364F9DBE53FCB7CEB6FC4C", 00:14:11.058 "uuid": "f0b360d3-9436-4f9d-be53-fcb7ceb6fc4c" 00:14:11.058 }, 00:14:11.058 { 00:14:11.058 "nsid": 2, 00:14:11.058 "bdev_name": "Malloc4", 00:14:11.058 "name": "Malloc4", 00:14:11.058 "nguid": "7499DABD1EF54411A1149D675B85E604", 00:14:11.058 "uuid": "7499dabd-1ef5-4411-a114-9d675b85e604" 00:14:11.058 } 00:14:11.058 ] 00:14:11.058 } 00:14:11.058 ] 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1467007 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1459066 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1459066 ']' 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1459066 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1459066 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo 
']' 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1459066' 00:14:11.058 killing process with pid 1459066 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1459066 00:14:11.058 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1459066 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1467150 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1467150' 00:14:11.318 Process pid: 1467150 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1467150 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1467150 ']' 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:11.318 11:21:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:11.318 [2024-06-10 11:21:08.347719] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:11.318 [2024-06-10 11:21:08.348584] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:14:11.318 [2024-06-10 11:21:08.348625] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.318 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.318 [2024-06-10 11:21:08.431007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.318 [2024-06-10 11:21:08.499268] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.318 [2024-06-10 11:21:08.499311] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.318 [2024-06-10 11:21:08.499318] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.318 [2024-06-10 11:21:08.499325] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.318 [2024-06-10 11:21:08.499334] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.318 [2024-06-10 11:21:08.499441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.318 [2024-06-10 11:21:08.499564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.318 [2024-06-10 11:21:08.499714] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.318 [2024-06-10 11:21:08.499715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.577 [2024-06-10 11:21:08.566044] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:11.577 [2024-06-10 11:21:08.566243] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:11.577 [2024-06-10 11:21:08.566741] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:11.577 [2024-06-10 11:21:08.567075] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:11.577 [2024-06-10 11:21:08.567189] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:12.147 11:21:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:12.147 11:21:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:14:12.147 11:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:13.085 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:13.344 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:13.344 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:13.344 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.345 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:13.345 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:13.604 Malloc1 00:14:13.604 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:13.604 11:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:13.864 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:14.123 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:14:14.123 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:14.123 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:14.382 Malloc2 00:14:14.382 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:14.641 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:14.641 11:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1467150 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1467150 ']' 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1467150 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1467150 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:14.901 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1467150' 00:14:14.902 killing process with pid 1467150 00:14:14.902 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1467150 00:14:14.902 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1467150 00:14:15.161 11:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:15.161 11:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:15.161 00:14:15.161 real 0m51.723s 00:14:15.161 user 3m25.453s 00:14:15.161 sys 0m3.207s 00:14:15.161 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:15.161 11:21:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.161 ************************************ 00:14:15.161 END TEST nvmf_vfio_user 00:14:15.161 ************************************ 00:14:15.161 11:21:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:15.161 11:21:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:15.161 11:21:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:15.161 11:21:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.161 ************************************ 00:14:15.161 START TEST nvmf_vfio_user_nvme_compliance 00:14:15.161 
************************************ 00:14:15.161 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:15.420 * Looking for test storage... 00:14:15.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.420 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1468452 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1468452' 00:14:15.421 Process pid: 1468452 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1468452 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 1468452 ']' 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:15.421 11:21:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.421 [2024-06-10 11:21:12.493544] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:14:15.421 [2024-06-10 11:21:12.493601] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.421 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.421 [2024-06-10 11:21:12.576996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.421 [2024-06-10 11:21:12.639520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.421 [2024-06-10 11:21:12.639557] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.421 [2024-06-10 11:21:12.639564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.421 [2024-06-10 11:21:12.639570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.421 [2024-06-10 11:21:12.639575] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:15.421 [2024-06-10 11:21:12.639616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.421 [2024-06-10 11:21:12.639936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.421 [2024-06-10 11:21:12.640036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.357 11:21:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:16.357 11:21:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:14:16.357 11:21:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 malloc0 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.298 
11:21:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:17.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.298 00:14:17.298 00:14:17.298 CUnit - A unit testing framework for C - Version 2.1-3 00:14:17.298 http://cunit.sourceforge.net/ 00:14:17.298 00:14:17.298 00:14:17.298 Suite: nvme_compliance 00:14:17.558 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 11:21:14.564271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.558 [2024-06-10 11:21:14.565601] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:17.558 [2024-06-10 11:21:14.565614] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:17.558 [2024-06-10 11:21:14.565620] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:17.558 [2024-06-10 11:21:14.567289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.558 passed 00:14:17.558 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 11:21:14.659888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.558 [2024-06-10 11:21:14.662902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.558 passed 00:14:17.558 Test: admin_identify_ns ...[2024-06-10 11:21:14.752377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.819 [2024-06-10 11:21:14.811842] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:17.819 [2024-06-10 11:21:14.819831] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:17.819 [2024-06-10 11:21:14.840933] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.819 passed 00:14:17.819 Test: admin_get_features_mandatory_features ...[2024-06-10 11:21:14.931746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.819 [2024-06-10 11:21:14.934769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.819 passed 00:14:17.819 Test: admin_get_features_optional_features ...[2024-06-10 11:21:15.024275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.819 [2024-06-10 11:21:15.027287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.078 passed 00:14:18.079 Test: admin_set_features_number_of_queues ...[2024-06-10 11:21:15.114551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.079 [2024-06-10 11:21:15.222914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.079 passed 00:14:18.339 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 11:21:15.311235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.339 [2024-06-10 11:21:15.314252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.339 passed 00:14:18.339 Test: admin_get_log_page_with_lpo ...[2024-06-10 11:21:15.403555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.339 [2024-06-10 11:21:15.470831] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:18.339 [2024-06-10 11:21:15.483868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.339 passed 00:14:18.599 Test: fabric_property_get ...[2024-06-10 11:21:15.570183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.599 [2024-06-10 11:21:15.571434] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:18.599 [2024-06-10 11:21:15.573201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.599 passed 00:14:18.599 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 11:21:15.662732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.599 [2024-06-10 11:21:15.663975] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:18.599 [2024-06-10 11:21:15.667768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.599 passed 00:14:18.600 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 11:21:15.756375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.860 [2024-06-10 11:21:15.839831] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.860 [2024-06-10 11:21:15.855838] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.860 [2024-06-10 11:21:15.860920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.860 passed 00:14:18.860 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 11:21:15.953450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.860 [2024-06-10 11:21:15.954665] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:18.860 [2024-06-10 11:21:15.956463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.860 passed 00:14:18.860 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 11:21:16.046352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.119 [2024-06-10 11:21:16.121831] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.119 [2024-06-10 11:21:16.145826] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.119 [2024-06-10 11:21:16.150907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.119 passed 00:14:19.119 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 11:21:16.243083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.120 [2024-06-10 11:21:16.244304] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:19.120 [2024-06-10 11:21:16.244324] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:19.120 [2024-06-10 11:21:16.246108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.120 passed 00:14:19.120 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 11:21:16.337353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.379 [2024-06-10 11:21:16.428830] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:19.379 [2024-06-10 11:21:16.436831] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:19.379 [2024-06-10 11:21:16.444829] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:19.379 [2024-06-10 11:21:16.452827] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:19.379 [2024-06-10 11:21:16.481907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.379 passed 00:14:19.379 Test: admin_create_io_sq_verify_pc ...[2024-06-10 11:21:16.568189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.379 [2024-06-10 11:21:16.586836] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:19.672 [2024-06-10 11:21:16.604339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.672 passed 00:14:19.673 Test: admin_create_io_qp_max_qps ...[2024-06-10 11:21:16.697890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.633 [2024-06-10 11:21:17.792833] nvme_ctrlr.c:5384:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:21.248 [2024-06-10 11:21:18.169916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.248 passed 00:14:21.248 Test: admin_create_io_sq_shared_cq ...[2024-06-10 11:21:18.260285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.248 [2024-06-10 11:21:18.391827] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:21.248 [2024-06-10 11:21:18.428882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.248 passed 00:14:21.248 00:14:21.248 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.248 suites 1 1 n/a 0 0 00:14:21.248 tests 18 18 18 0 0 00:14:21.248 asserts 360 360 360 0 n/a 00:14:21.248 00:14:21.248 Elapsed time = 1.608 seconds 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1468452 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 1468452 ']' 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 1468452 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1468452 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1468452' 00:14:21.508 killing process with pid 1468452 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 1468452 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 1468452 00:14:21.508 11:21:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:21.508 00:14:21.508 real 0m6.340s 00:14:21.508 user 0m18.235s 00:14:21.508 sys 0m0.432s 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:21.508 11:21:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.508 ************************************ 00:14:21.508 END TEST nvmf_vfio_user_nvme_compliance 00:14:21.508 ************************************ 00:14:21.508 11:21:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.508 11:21:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:21.508 11:21:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:21.508 11:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.769 ************************************ 00:14:21.769 START TEST nvmf_vfio_user_fuzz 00:14:21.769 ************************************ 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.769 * Looking for test storage... 00:14:21.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:21.769 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1469447 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1469447' 00:14:21.770 Process pid: 1469447 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1469447 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1469447 ']' 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
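The target side of this fuzz test is a plain nvmf_tgt pinned to core 0, launched from the workspace build and polled until its RPC socket answers. A minimal stand-alone sketch of that launch follows; the harness helpers waitforlisten and killprocess from autotest_common.sh are replaced here by a hand-rolled poll and a plain kill, everything else is taken from the log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
# poll the default RPC socket until the target has finished initializing
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
    sleep 0.5
done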
00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:21.770 11:21:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.707 11:21:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:22.707 11:21:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:14:22.707 11:21:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 malloc0 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:23.646 11:21:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:55.735 Fuzzing completed. 
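rpc_cmd in the lines above is the harness wrapper around scripts/rpc.py on /var/tmp/spdk.sock. Issued by hand, the same vfio-user target setup and fuzzer run would look roughly like the sketch below; every flag and argument is copied verbatim from the log, only the wrapper-to-rpc.py substitution is assumed.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
mkdir -p /var/run/vfio-user
$RPC nvmf_create_transport -t VFIOUSER
$RPC bdev_malloc_create 64 512 -b malloc0            # 64 MB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# 30-second randomized command stream against the vfio-user endpoint
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a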
Shutting down the fuzz application 00:14:55.735 00:14:55.735 Dumping successful admin opcodes: 00:14:55.735 8, 9, 10, 24, 00:14:55.735 Dumping successful io opcodes: 00:14:55.735 0, 00:14:55.735 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1067735, total successful commands: 4206, random_seed: 602235392 00:14:55.735 NS: 0x200003a1ef00 admin qp, Total commands completed: 265036, total successful commands: 2133, random_seed: 2690382016 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1469447 ']' 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1469447' 00:14:55.735 killing process with pid 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 1469447 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:55.735 00:14:55.735 real 0m32.767s 00:14:55.735 user 0m36.699s 00:14:55.735 sys 0m25.899s 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:55.735 11:21:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.735 ************************************ 00:14:55.735 END TEST nvmf_vfio_user_fuzz 00:14:55.735 ************************************ 00:14:55.735 11:21:51 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:55.735 11:21:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:55.735 11:21:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:55.735 11:21:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.735 ************************************ 00:14:55.735 START TEST nvmf_host_management 00:14:55.735 
************************************ 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:55.735 * Looking for test storage... 00:14:55.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.735 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.736 11:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.872 11:21:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:03.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:03.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:03.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:03.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.872 11:21:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.872 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.872 11:22:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:15:03.873 00:15:03.873 --- 10.0.0.2 ping statistics --- 00:15:03.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.873 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:15:03.873 00:15:03.873 --- 10.0.0.1 ping statistics --- 00:15:03.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.873 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1479051 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1479051 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1479051 ']' 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
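Before the target application is launched, nvmf_tcp_init has split the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the two pings above verify the path in both directions. Condensed from the commands in the log (only the comments are added), the plumbing is:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                     # root namespace to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace to initiator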
00:15:03.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:03.873 11:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 [2024-06-10 11:22:00.152012] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:15:03.873 [2024-06-10 11:22:00.152079] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.873 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.873 [2024-06-10 11:22:00.225835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.873 [2024-06-10 11:22:00.297867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.873 [2024-06-10 11:22:00.297906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.873 [2024-06-10 11:22:00.297914] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.873 [2024-06-10 11:22:00.297921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.873 [2024-06-10 11:22:00.297926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.873 [2024-06-10 11:22:00.298034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.873 [2024-06-10 11:22:00.298187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.873 [2024-06-10 11:22:00.298339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.873 [2024-06-10 11:22:00.298341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 [2024-06-10 11:22:01.053462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.873 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:03.873 Malloc0 00:15:04.132 [2024-06-10 11:22:01.109491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1479120 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1479120 /var/tmp/bdevperf.sock 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1479120 ']' 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
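The rpcs.txt batch that host_management.sh cats into rpc_cmd at this point is not echoed in the log; only its effect is visible, namely a Malloc0 bdev and an NVMe/TCP listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0 with host nqn.2016-06.io.spdk:host0 (the host is removed again further down via nvmf_subsystem_remove_host). The sketch below is therefore a reconstruction consistent with those results, not the literal batch; the serial number simply reuses NVMF_SERIAL from common.sh.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"                   # target's default RPC socket
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420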
00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:04.132 { 00:15:04.132 "params": { 00:15:04.132 "name": "Nvme$subsystem", 00:15:04.132 "trtype": "$TEST_TRANSPORT", 00:15:04.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.132 "adrfam": "ipv4", 00:15:04.132 "trsvcid": "$NVMF_PORT", 00:15:04.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.132 "hdgst": ${hdgst:-false}, 00:15:04.132 "ddgst": ${ddgst:-false} 00:15:04.132 }, 00:15:04.132 "method": "bdev_nvme_attach_controller" 00:15:04.132 } 00:15:04.132 EOF 00:15:04.132 )") 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:04.132 11:22:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:04.132 "params": { 00:15:04.132 "name": "Nvme0", 00:15:04.132 "trtype": "tcp", 00:15:04.132 "traddr": "10.0.0.2", 00:15:04.132 "adrfam": "ipv4", 00:15:04.132 "trsvcid": "4420", 00:15:04.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:04.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:04.132 "hdgst": false, 00:15:04.132 "ddgst": false 00:15:04.132 }, 00:15:04.132 "method": "bdev_nvme_attach_controller" 00:15:04.132 }' 00:15:04.132 [2024-06-10 11:22:01.210199] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:15:04.132 [2024-06-10 11:22:01.210247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479120 ] 00:15:04.132 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.132 [2024-06-10 11:22:01.292676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.132 [2024-06-10 11:22:01.355749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.392 Running I/O for 10 seconds... 
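gen_nvmf_target_json 0 expands to the bdev_nvme_attach_controller parameters printed just above, and bdevperf reads them through /dev/fd/63. Written out to an ordinary file, the same run looks like the sketch below; the params block is copied from the log, while the surrounding subsystems/bdev envelope and the /tmp/nvme0.json file name are assumptions made for the sake of a self-contained example.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# queue depth 64, 65536-byte I/Os, verify workload, 10 seconds (flags as in the log)
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10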
00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.963 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 [2024-06-10 11:22:02.128878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409570 is same with the state(5) to be set 00:15:04.963 [2024-06-10 11:22:02.129427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.963 [2024-06-10 11:22:02.129465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.963 [2024-06-10 11:22:02.129484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.964 [2024-06-10 11:22:02.129825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.964 [2024-06-10 11:22:02.129832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.129987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.129994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.965 [2024-06-10 11:22:02.130097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.965 [2024-06-10 11:22:02.130104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.966 [2024-06-10 11:22:02.130416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.966 [2024-06-10 11:22:02.130423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.967 [2024-06-10 11:22:02.130432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.967 [2024-06-10 11:22:02.130439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.967 [2024-06-10 11:22:02.130448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.967 [2024-06-10 11:22:02.130454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.967 [2024-06-10 11:22:02.130463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.967 [2024-06-10 11:22:02.130470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.967 [2024-06-10 11:22:02.130478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.967 [2024-06-10 11:22:02.130486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.967 [2024-06-10 11:22:02.130512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:15:04.967 [2024-06-10 11:22:02.130552] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e93c0 was disconnected and freed. reset controller. 00:15:04.967 [2024-06-10 11:22:02.131646] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:04.967 task offset: 107264 on job bdev=Nvme0n1 fails 00:15:04.967 00:15:04.967 Latency(us) 00:15:04.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.967 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:04.967 Job: Nvme0n1 ended in about 0.52 seconds with error 00:15:04.967 Verification LBA range: start 0x0 length 0x400 00:15:04.967 Nvme0n1 : 0.52 1616.71 101.04 124.06 0.00 35879.11 1455.66 30650.68 00:15:04.967 =================================================================================================================== 00:15:04.967 Total : 1616.71 101.04 124.06 0.00 35879.11 1455.66 30650.68 00:15:04.967 [2024-06-10 11:22:02.133566] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:04.967 [2024-06-10 11:22:02.133587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b8300 (9): Bad file descriptor 00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.967 [2024-06-10 11:22:02.144630] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
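Editor's note on the trace above: the burst of "ABORTED - SQ DELETION (00/08)" completions, the CQ transport error -6, and the controller reset are how the bdev_nvme layer reacts when the target revokes this host's access while I/O is in flight; the nvmf_subsystem_add_host call at host_management.sh line 85 re-admits the host, which is why the reset then completes. A minimal sketch of that allowed-host round trip follows; the rpc.py path and NQNs are taken from this run, while the earlier nvmf_subsystem_remove_host step is an assumption about what the script did before this excerpt.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Revoking the host drops its connection; queued I/O completes as
# ABORTED - SQ DELETION and bdev_nvme begins resetting the controller.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admitting the host lets the pending reset reconnect, matching
# "Resetting controller successful" in the trace above.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0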
00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.967 11:22:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1479120 00:15:06.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1479120) - No such process 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:06.348 { 00:15:06.348 "params": { 00:15:06.348 "name": "Nvme$subsystem", 00:15:06.348 "trtype": "$TEST_TRANSPORT", 00:15:06.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:06.348 "adrfam": "ipv4", 00:15:06.348 "trsvcid": "$NVMF_PORT", 00:15:06.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:06.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:06.348 "hdgst": ${hdgst:-false}, 00:15:06.348 "ddgst": ${ddgst:-false} 00:15:06.348 }, 00:15:06.348 "method": "bdev_nvme_attach_controller" 00:15:06.348 } 00:15:06.348 EOF 00:15:06.348 )") 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:06.348 11:22:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:06.348 "params": { 00:15:06.348 "name": "Nvme0", 00:15:06.348 "trtype": "tcp", 00:15:06.348 "traddr": "10.0.0.2", 00:15:06.348 "adrfam": "ipv4", 00:15:06.348 "trsvcid": "4420", 00:15:06.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:06.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:06.348 "hdgst": false, 00:15:06.348 "ddgst": false 00:15:06.348 }, 00:15:06.348 "method": "bdev_nvme_attach_controller" 00:15:06.348 }' 00:15:06.348 [2024-06-10 11:22:03.201407] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:15:06.348 [2024-06-10 11:22:03.201456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479446 ] 00:15:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.348 [2024-06-10 11:22:03.282373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.348 [2024-06-10 11:22:03.343548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.610 Running I/O for 1 seconds... 
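Editor's note: the heredoc above is the per-controller fragment that gen_nvmf_target_json fills in, and the JSON printed after jq is what this bdevperf instance uses to attach bdev Nvme0 to the target over TCP. A hedged sketch of driving the same one-second verify run by hand is below; the bdevperf flags and attach parameters are copied from the trace, but the top-level "subsystems"/"config" wrapper is assumed (only the fragment is shown in the log) and /tmp/nvme0.json is a hypothetical file name.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the run above: queue depth 64, 64 KiB I/O, verify, 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1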
00:15:07.551 00:15:07.551 Latency(us) 00:15:07.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.551 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:07.551 Verification LBA range: start 0x0 length 0x400 00:15:07.551 Nvme0n1 : 1.01 1771.76 110.74 0.00 0.00 35464.21 1663.61 34683.67 00:15:07.551 =================================================================================================================== 00:15:07.551 Total : 1771.76 110.74 0.00 0.00 35464.21 1663.61 34683.67 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.551 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.551 rmmod nvme_tcp 00:15:07.812 rmmod nvme_fabrics 00:15:07.812 rmmod nvme_keyring 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1479051 ']' 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1479051 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1479051 ']' 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1479051 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1479051 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1479051' 00:15:07.812 killing process with pid 1479051 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1479051 00:15:07.812 11:22:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 1479051 00:15:07.812 [2024-06-10 11:22:05.021655] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.073 11:22:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.990 11:22:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.990 11:22:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:09.990 00:15:09.990 real 0m15.547s 00:15:09.990 user 0m23.936s 00:15:09.990 sys 0m7.276s 00:15:09.990 11:22:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:09.990 11:22:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.990 ************************************ 00:15:09.990 END TEST nvmf_host_management 00:15:09.990 ************************************ 00:15:09.990 11:22:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:09.990 11:22:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:09.990 11:22:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:09.990 11:22:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.990 ************************************ 00:15:09.990 START TEST nvmf_lvol 00:15:09.990 ************************************ 00:15:09.990 11:22:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:10.251 * Looking for test storage... 
00:15:10.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.251 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.252 11:22:07 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.252 11:22:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:18.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:18.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.413 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:18.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:18.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:18.414 
11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.414 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:15:18.674 00:15:18.674 --- 10.0.0.2 ping statistics --- 00:15:18.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.674 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:18.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:18.674 00:15:18.674 --- 10.0.0.1 ping statistics --- 00:15:18.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.674 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1484232 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1484232 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1484232 ']' 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:18.674 11:22:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:18.674 [2024-06-10 11:22:15.773326] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:15:18.674 [2024-06-10 11:22:15.773384] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.674 [2024-06-10 11:22:15.867703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.934 [2024-06-10 11:22:15.960270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.934 [2024-06-10 11:22:15.960330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
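Editor's note: the setup traced above splits the two E810 ports between network namespaces so one machine can act as both target and initiator over a real link: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the pings confirm connectivity before nvmf_tgt is launched inside the namespace. Condensed into one hedged recap (all commands appear in the trace; the interface names are specific to this test node):

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address in the root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1                     # target -> initiator
# The test then backgrounds the target inside the namespace and waits for its RPC socket:
ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7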
00:15:18.934 [2024-06-10 11:22:15.960338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.934 [2024-06-10 11:22:15.960344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.934 [2024-06-10 11:22:15.960350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.934 [2024-06-10 11:22:15.960480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.934 [2024-06-10 11:22:15.960608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.934 [2024-06-10 11:22:15.960610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.505 11:22:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.765 [2024-06-10 11:22:16.850146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.765 11:22:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.026 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:20.026 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.286 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:20.286 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:20.546 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:20.546 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d27d7df8-88f1-4ed1-bfe3-0d7eca887fa7 00:15:20.546 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d27d7df8-88f1-4ed1-bfe3-0d7eca887fa7 lvol 20 00:15:20.807 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bd490096-ebef-4d6d-953f-2ea7a86eeb4d 00:15:20.807 11:22:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:21.067 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bd490096-ebef-4d6d-953f-2ea7a86eeb4d 00:15:21.327 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
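Editor's note: the RPC calls traced above provision everything this test exercises: two 64 MiB malloc bdevs striped into a raid0, a logical-volume store on the raid, a 20 MiB lvol carved from it, and an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420. Gathered into one hedged sketch (commands and arguments are taken from the trace; the captured UUIDs will differ from run to run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # 64 MiB, 512-byte blocks -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID, e.g. d27d7df8-88f1-4ed1-bfe3-0d7eca887fa7
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol, e.g. bd490096-ebef-4d6d-953f-2ea7a86eeb4d
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420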
00:15:21.327 [2024-06-10 11:22:18.520550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.587 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.587 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1484863 00:15:21.587 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:21.587 11:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:21.587 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.527 11:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bd490096-ebef-4d6d-953f-2ea7a86eeb4d MY_SNAPSHOT 00:15:22.786 11:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=aeb42b05-b0b1-4467-9ea6-eae8cd98ccaa 00:15:22.787 11:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bd490096-ebef-4d6d-953f-2ea7a86eeb4d 30 00:15:23.047 11:22:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone aeb42b05-b0b1-4467-9ea6-eae8cd98ccaa MY_CLONE 00:15:23.308 11:22:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b6c35d3b-bb22-4acc-8666-ab2932ca365f 00:15:23.308 11:22:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b6c35d3b-bb22-4acc-8666-ab2932ca365f 00:15:23.880 11:22:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1484863 00:15:32.013 Initializing NVMe Controllers 00:15:32.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:32.013 Controller IO queue size 128, less than required. 00:15:32.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:32.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:32.013 Initialization complete. Launching workers. 
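Editor's note: while spdk_nvme_perf holds a 10-second random-write load against the subsystem (queue depth 128, 4 KiB I/O, core mask 0x18, i.e. cores 3 and 4 as seen in the results below), the test mutates the volume underneath it: snapshot, grow to 30 MiB, clone the snapshot, then inflate the clone. A hedged recap, continuing the $rpc and $lvol variables from the provisioning sketch earlier (the MY_SNAPSHOT/MY_CLONE names and the new size are from the trace):

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live lvol
$rpc bdev_lvol_resize "$lvol" 30                      # grow the lvol from 20 MiB to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
$rpc bdev_lvol_inflate "$clone"                       # copy in shared clusters so the clone no longer depends on the snapshot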
00:15:32.013 ======================================================== 00:15:32.013 Latency(us) 00:15:32.013 Device Information : IOPS MiB/s Average min max 00:15:32.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12967.40 50.65 9874.28 1359.23 51014.07 00:15:32.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13254.20 51.77 9659.22 1193.05 51714.74 00:15:32.013 ======================================================== 00:15:32.013 Total : 26221.60 102.43 9765.58 1193.05 51714.74 00:15:32.013 00:15:32.013 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:32.272 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bd490096-ebef-4d6d-953f-2ea7a86eeb4d 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d27d7df8-88f1-4ed1-bfe3-0d7eca887fa7 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.532 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.792 rmmod nvme_tcp 00:15:32.792 rmmod nvme_fabrics 00:15:32.792 rmmod nvme_keyring 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1484232 ']' 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1484232 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 1484232 ']' 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1484232 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1484232 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:32.792 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1484232' 00:15:32.793 killing process with pid 1484232 00:15:32.793 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1484232 00:15:32.793 11:22:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1484232 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.793 
11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.793 11:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.335 00:15:35.335 real 0m24.890s 00:15:35.335 user 1m6.166s 00:15:35.335 sys 0m8.726s 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 ************************************ 00:15:35.335 END TEST nvmf_lvol 00:15:35.335 ************************************ 00:15:35.335 11:22:32 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:35.335 11:22:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:35.335 11:22:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:35.335 11:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 ************************************ 00:15:35.335 START TEST nvmf_lvs_grow 00:15:35.335 ************************************ 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:35.335 * Looking for test storage... 
00:15:35.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.335 11:22:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.336 11:22:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:43.477 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:43.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:43.478 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:43.478 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:43.478 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:15:43.478 00:15:43.478 --- 10.0.0.2 ping statistics --- 00:15:43.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.478 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:15:43.478 00:15:43.478 --- 10.0.0.1 ping statistics --- 00:15:43.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.478 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:43.478 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1491056 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1491056 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1491056 ']' 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:43.479 11:22:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:43.479 [2024-06-10 11:22:40.628076] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:15:43.479 [2024-06-10 11:22:40.628127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.479 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.739 [2024-06-10 11:22:40.718457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.739 [2024-06-10 11:22:40.812230] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.739 [2024-06-10 11:22:40.812294] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:43.739 [2024-06-10 11:22:40.812302] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.739 [2024-06-10 11:22:40.812309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.739 [2024-06-10 11:22:40.812315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.739 [2024-06-10 11:22:40.812343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.309 11:22:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.571 [2024-06-10 11:22:41.715320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:44.571 ************************************ 00:15:44.571 START TEST lvs_grow_clean 00:15:44.571 ************************************ 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:44.571 11:22:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:44.832 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:44.832 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:45.092 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:45.092 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:45.092 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:45.353 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:45.353 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:45.353 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 lvol 150 00:15:45.613 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6aa85404-2ec9-4ecb-a325-b7a5652ab03a 00:15:45.613 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:45.613 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:45.613 [2024-06-10 11:22:42.827467] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:45.613 [2024-06-10 11:22:42.827538] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:45.613 true 00:15:45.873 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:45.873 11:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:45.873 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:45.873 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:46.134 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6aa85404-2ec9-4ecb-a325-b7a5652ab03a 00:15:46.394 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:46.654 [2024-06-10 11:22:43.645957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1491566 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1491566 /var/tmp/bdevperf.sock 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1491566 ']' 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:46.654 11:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:46.914 [2024-06-10 11:22:43.918999] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:15:46.914 [2024-06-10 11:22:43.919068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491566 ] 00:15:46.914 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.914 [2024-06-10 11:22:43.985459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.914 [2024-06-10 11:22:44.056232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.919 11:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:47.919 11:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:15:47.919 11:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:47.919 Nvme0n1 00:15:47.919 11:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:48.205 [ 00:15:48.205 { 00:15:48.205 "name": "Nvme0n1", 00:15:48.205 "aliases": [ 00:15:48.205 "6aa85404-2ec9-4ecb-a325-b7a5652ab03a" 00:15:48.205 ], 00:15:48.205 "product_name": "NVMe disk", 00:15:48.205 "block_size": 4096, 00:15:48.205 "num_blocks": 38912, 00:15:48.205 "uuid": "6aa85404-2ec9-4ecb-a325-b7a5652ab03a", 00:15:48.205 "assigned_rate_limits": { 00:15:48.205 "rw_ios_per_sec": 0, 00:15:48.205 "rw_mbytes_per_sec": 0, 00:15:48.205 "r_mbytes_per_sec": 0, 00:15:48.205 "w_mbytes_per_sec": 0 00:15:48.205 }, 00:15:48.205 "claimed": false, 00:15:48.205 "zoned": false, 00:15:48.205 "supported_io_types": { 00:15:48.205 "read": true, 00:15:48.205 "write": true, 00:15:48.205 "unmap": true, 00:15:48.205 "write_zeroes": true, 00:15:48.205 "flush": true, 00:15:48.205 "reset": true, 00:15:48.205 "compare": true, 00:15:48.205 "compare_and_write": true, 00:15:48.205 "abort": true, 00:15:48.205 "nvme_admin": true, 00:15:48.205 "nvme_io": true 00:15:48.205 }, 00:15:48.205 "memory_domains": [ 00:15:48.205 { 00:15:48.205 "dma_device_id": "system", 00:15:48.205 "dma_device_type": 1 00:15:48.205 } 00:15:48.205 ], 00:15:48.205 "driver_specific": { 00:15:48.205 "nvme": [ 00:15:48.205 { 00:15:48.205 "trid": { 00:15:48.205 "trtype": "TCP", 00:15:48.205 "adrfam": "IPv4", 00:15:48.205 "traddr": "10.0.0.2", 00:15:48.205 "trsvcid": "4420", 00:15:48.205 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:48.205 }, 00:15:48.205 "ctrlr_data": { 00:15:48.205 "cntlid": 1, 00:15:48.205 "vendor_id": "0x8086", 00:15:48.205 "model_number": "SPDK bdev Controller", 00:15:48.205 "serial_number": "SPDK0", 00:15:48.205 "firmware_revision": "24.09", 00:15:48.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:48.205 "oacs": { 00:15:48.205 "security": 0, 00:15:48.205 "format": 0, 00:15:48.205 "firmware": 0, 00:15:48.205 "ns_manage": 0 00:15:48.205 }, 00:15:48.205 "multi_ctrlr": true, 00:15:48.205 "ana_reporting": false 00:15:48.205 }, 00:15:48.205 "vs": { 00:15:48.205 "nvme_version": "1.3" 00:15:48.205 }, 00:15:48.205 "ns_data": { 00:15:48.205 "id": 1, 00:15:48.205 "can_share": true 00:15:48.205 } 00:15:48.205 } 00:15:48.205 ], 00:15:48.205 "mp_policy": "active_passive" 00:15:48.205 } 00:15:48.205 } 00:15:48.205 ] 00:15:48.205 11:22:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1491857 00:15:48.205 11:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:48.205 11:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:48.205 Running I/O for 10 seconds... 00:15:49.589 Latency(us) 00:15:49.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.589 Nvme0n1 : 1.00 19456.00 76.00 0.00 0.00 0.00 0.00 0.00 00:15:49.589 =================================================================================================================== 00:15:49.589 Total : 19456.00 76.00 0.00 0.00 0.00 0.00 0.00 00:15:49.589 00:15:50.160 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:50.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.420 Nvme0n1 : 2.00 19518.50 76.24 0.00 0.00 0.00 0.00 0.00 00:15:50.420 =================================================================================================================== 00:15:50.420 Total : 19518.50 76.24 0.00 0.00 0.00 0.00 0.00 00:15:50.420 00:15:50.420 true 00:15:50.420 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:50.421 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:50.680 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:50.680 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:50.680 11:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1491857 00:15:51.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.250 Nvme0n1 : 3.00 19538.67 76.32 0.00 0.00 0.00 0.00 0.00 00:15:51.251 =================================================================================================================== 00:15:51.251 Total : 19538.67 76.32 0.00 0.00 0.00 0.00 0.00 00:15:51.251 00:15:52.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.631 Nvme0n1 : 4.00 19582.25 76.49 0.00 0.00 0.00 0.00 0.00 00:15:52.631 =================================================================================================================== 00:15:52.631 Total : 19582.25 76.49 0.00 0.00 0.00 0.00 0.00 00:15:52.631 00:15:53.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.572 Nvme0n1 : 5.00 19598.80 76.56 0.00 0.00 0.00 0.00 0.00 00:15:53.572 =================================================================================================================== 00:15:53.572 Total : 19598.80 76.56 0.00 0.00 0.00 0.00 0.00 00:15:53.572 00:15:54.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.513 Nvme0n1 : 6.00 19624.83 76.66 0.00 0.00 0.00 0.00 0.00 00:15:54.513 
=================================================================================================================== 00:15:54.513 Total : 19624.83 76.66 0.00 0.00 0.00 0.00 0.00 00:15:54.513 00:15:55.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.454 Nvme0n1 : 7.00 19637.14 76.71 0.00 0.00 0.00 0.00 0.00 00:15:55.454 =================================================================================================================== 00:15:55.454 Total : 19637.14 76.71 0.00 0.00 0.00 0.00 0.00 00:15:55.454 00:15:56.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.395 Nvme0n1 : 8.00 19646.50 76.74 0.00 0.00 0.00 0.00 0.00 00:15:56.395 =================================================================================================================== 00:15:56.395 Total : 19646.50 76.74 0.00 0.00 0.00 0.00 0.00 00:15:56.395 00:15:57.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.334 Nvme0n1 : 9.00 19660.33 76.80 0.00 0.00 0.00 0.00 0.00 00:15:57.334 =================================================================================================================== 00:15:57.334 Total : 19660.33 76.80 0.00 0.00 0.00 0.00 0.00 00:15:57.334 00:15:58.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.273 Nvme0n1 : 10.00 19671.90 76.84 0.00 0.00 0.00 0.00 0.00 00:15:58.274 =================================================================================================================== 00:15:58.274 Total : 19671.90 76.84 0.00 0.00 0.00 0.00 0.00 00:15:58.274 00:15:58.274 00:15:58.274 Latency(us) 00:15:58.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.274 Nvme0n1 : 10.01 19671.08 76.84 0.00 0.00 6502.94 1928.27 11241.94 00:15:58.274 =================================================================================================================== 00:15:58.274 Total : 19671.08 76.84 0.00 0.00 6502.94 1928.27 11241.94 00:15:58.274 0 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1491566 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1491566 ']' 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1491566 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:58.274 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491566 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491566' 00:15:58.533 killing process with pid 1491566 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1491566 00:15:58.533 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.533 00:15:58.533 Latency(us) 00:15:58.533 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:58.533 =================================================================================================================== 00:15:58.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1491566 00:15:58.533 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.792 11:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:59.052 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:59.052 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:59.052 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:59.052 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:59.052 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:59.312 [2024-06-10 11:22:56.402491] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:59.312 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:15:59.572 request: 00:15:59.572 { 00:15:59.572 "uuid": "cb2ca24d-af2f-4eea-8d5e-91d1957c25e5", 00:15:59.572 "method": "bdev_lvol_get_lvstores", 00:15:59.572 "req_id": 1 00:15:59.572 } 00:15:59.572 Got JSON-RPC error response 00:15:59.572 response: 00:15:59.572 { 00:15:59.572 "code": -19, 00:15:59.572 "message": "No such device" 00:15:59.572 } 00:15:59.572 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:15:59.572 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:59.572 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:59.572 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:59.572 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:59.833 aio_bdev 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6aa85404-2ec9-4ecb-a325-b7a5652ab03a 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=6aa85404-2ec9-4ecb-a325-b7a5652ab03a 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:59.833 11:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6aa85404-2ec9-4ecb-a325-b7a5652ab03a -t 2000 00:16:00.094 [ 00:16:00.094 { 00:16:00.094 "name": "6aa85404-2ec9-4ecb-a325-b7a5652ab03a", 00:16:00.094 "aliases": [ 00:16:00.094 "lvs/lvol" 00:16:00.094 ], 00:16:00.094 "product_name": "Logical Volume", 00:16:00.094 "block_size": 4096, 00:16:00.094 "num_blocks": 38912, 00:16:00.094 "uuid": "6aa85404-2ec9-4ecb-a325-b7a5652ab03a", 00:16:00.094 "assigned_rate_limits": { 00:16:00.094 "rw_ios_per_sec": 0, 00:16:00.094 "rw_mbytes_per_sec": 0, 00:16:00.094 "r_mbytes_per_sec": 0, 00:16:00.094 "w_mbytes_per_sec": 0 00:16:00.094 }, 00:16:00.094 "claimed": false, 00:16:00.094 "zoned": false, 00:16:00.094 "supported_io_types": { 00:16:00.094 "read": true, 00:16:00.094 "write": true, 00:16:00.094 "unmap": true, 00:16:00.094 "write_zeroes": true, 00:16:00.094 "flush": false, 00:16:00.094 "reset": true, 00:16:00.094 "compare": false, 00:16:00.094 "compare_and_write": false, 00:16:00.094 "abort": false, 00:16:00.094 "nvme_admin": false, 00:16:00.094 "nvme_io": false 00:16:00.094 }, 00:16:00.094 "driver_specific": { 00:16:00.094 "lvol": { 00:16:00.094 "lvol_store_uuid": "cb2ca24d-af2f-4eea-8d5e-91d1957c25e5", 00:16:00.094 "base_bdev": "aio_bdev", 
00:16:00.094 "thin_provision": false, 00:16:00.094 "num_allocated_clusters": 38, 00:16:00.094 "snapshot": false, 00:16:00.094 "clone": false, 00:16:00.094 "esnap_clone": false 00:16:00.094 } 00:16:00.094 } 00:16:00.094 } 00:16:00.094 ] 00:16:00.094 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:16:00.094 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:16:00.094 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:00.354 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:00.354 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:16:00.354 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:00.613 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:00.614 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6aa85404-2ec9-4ecb-a325-b7a5652ab03a 00:16:00.614 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb2ca24d-af2f-4eea-8d5e-91d1957c25e5 00:16:00.873 11:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.133 00:16:01.133 real 0m16.398s 00:16:01.133 user 0m16.248s 00:16:01.133 sys 0m1.345s 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:01.133 ************************************ 00:16:01.133 END TEST lvs_grow_clean 00:16:01.133 ************************************ 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:01.133 ************************************ 00:16:01.133 START TEST lvs_grow_dirty 00:16:01.133 ************************************ 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.133 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:01.393 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:01.393 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:01.652 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:01.653 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:01.653 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:01.653 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:01.653 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:01.653 11:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 lvol 150 00:16:01.912 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:01.912 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.912 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:02.172 [2024-06-10 11:22:59.214694] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:02.172 [2024-06-10 11:22:59.214746] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:02.172 true 00:16:02.172 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:02.172 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:02.433 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:02.433 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:02.433 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:02.693 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:02.953 [2024-06-10 11:22:59.972930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.953 11:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:02.953 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1494361 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1494361 /var/tmp/bdevperf.sock 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1494361 ']' 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:02.954 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:03.214 [2024-06-10 11:23:00.222871] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:16:03.214 [2024-06-10 11:23:00.222921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494361 ] 00:16:03.214 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.214 [2024-06-10 11:23:00.284967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.214 [2024-06-10 11:23:00.346188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.214 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:03.214 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:03.214 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:03.784 Nvme0n1 00:16:03.784 11:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:04.044 [ 00:16:04.044 { 00:16:04.044 "name": "Nvme0n1", 00:16:04.044 "aliases": [ 00:16:04.044 "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f" 00:16:04.044 ], 00:16:04.044 "product_name": "NVMe disk", 00:16:04.044 "block_size": 4096, 00:16:04.044 "num_blocks": 38912, 00:16:04.044 "uuid": "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f", 00:16:04.044 "assigned_rate_limits": { 00:16:04.044 "rw_ios_per_sec": 0, 00:16:04.044 "rw_mbytes_per_sec": 0, 00:16:04.044 "r_mbytes_per_sec": 0, 00:16:04.044 "w_mbytes_per_sec": 0 00:16:04.044 }, 00:16:04.044 "claimed": false, 00:16:04.044 "zoned": false, 00:16:04.044 "supported_io_types": { 00:16:04.044 "read": true, 00:16:04.044 "write": true, 00:16:04.044 "unmap": true, 00:16:04.044 "write_zeroes": true, 00:16:04.044 "flush": true, 00:16:04.044 "reset": true, 00:16:04.044 "compare": true, 00:16:04.044 "compare_and_write": true, 00:16:04.044 "abort": true, 00:16:04.044 "nvme_admin": true, 00:16:04.044 "nvme_io": true 00:16:04.044 }, 00:16:04.044 "memory_domains": [ 00:16:04.044 { 00:16:04.044 "dma_device_id": "system", 00:16:04.044 "dma_device_type": 1 00:16:04.044 } 00:16:04.044 ], 00:16:04.044 "driver_specific": { 00:16:04.044 "nvme": [ 00:16:04.044 { 00:16:04.044 "trid": { 00:16:04.044 "trtype": "TCP", 00:16:04.044 "adrfam": "IPv4", 00:16:04.044 "traddr": "10.0.0.2", 00:16:04.044 "trsvcid": "4420", 00:16:04.044 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:04.044 }, 00:16:04.044 "ctrlr_data": { 00:16:04.044 "cntlid": 1, 00:16:04.044 "vendor_id": "0x8086", 00:16:04.044 "model_number": "SPDK bdev Controller", 00:16:04.044 "serial_number": "SPDK0", 00:16:04.044 "firmware_revision": "24.09", 00:16:04.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:04.044 "oacs": { 00:16:04.044 "security": 0, 00:16:04.044 "format": 0, 00:16:04.044 "firmware": 0, 00:16:04.044 "ns_manage": 0 00:16:04.044 }, 00:16:04.044 "multi_ctrlr": true, 00:16:04.044 "ana_reporting": false 00:16:04.044 }, 00:16:04.044 "vs": { 00:16:04.044 "nvme_version": "1.3" 00:16:04.044 }, 00:16:04.044 "ns_data": { 00:16:04.044 "id": 1, 00:16:04.044 "can_share": true 00:16:04.044 } 00:16:04.044 } 00:16:04.044 ], 00:16:04.044 "mp_policy": "active_passive" 00:16:04.044 } 00:16:04.044 } 00:16:04.044 ] 00:16:04.044 11:23:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1494601 00:16:04.044 11:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:04.044 11:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:04.044 Running I/O for 10 seconds... 00:16:04.984 Latency(us) 00:16:04.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.984 Nvme0n1 : 1.00 19573.00 76.46 0.00 0.00 0.00 0.00 0.00 00:16:04.984 =================================================================================================================== 00:16:04.984 Total : 19573.00 76.46 0.00 0.00 0.00 0.00 0.00 00:16:04.984 00:16:05.924 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:05.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.924 Nvme0n1 : 2.00 19634.50 76.70 0.00 0.00 0.00 0.00 0.00 00:16:05.924 =================================================================================================================== 00:16:05.924 Total : 19634.50 76.70 0.00 0.00 0.00 0.00 0.00 00:16:05.924 00:16:06.236 true 00:16:06.236 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:06.236 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:06.236 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:06.236 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:06.236 11:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1494601 00:16:07.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.176 Nvme0n1 : 3.00 19675.00 76.86 0.00 0.00 0.00 0.00 0.00 00:16:07.176 =================================================================================================================== 00:16:07.176 Total : 19675.00 76.86 0.00 0.00 0.00 0.00 0.00 00:16:07.176 00:16:08.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.117 Nvme0n1 : 4.00 19698.50 76.95 0.00 0.00 0.00 0.00 0.00 00:16:08.117 =================================================================================================================== 00:16:08.117 Total : 19698.50 76.95 0.00 0.00 0.00 0.00 0.00 00:16:08.117 00:16:09.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.058 Nvme0n1 : 5.00 19723.20 77.04 0.00 0.00 0.00 0.00 0.00 00:16:09.058 =================================================================================================================== 00:16:09.058 Total : 19723.20 77.04 0.00 0.00 0.00 0.00 0.00 00:16:09.058 00:16:10.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.060 Nvme0n1 : 6.00 19741.00 77.11 0.00 0.00 0.00 0.00 0.00 00:16:10.060 
=================================================================================================================== 00:16:10.060 Total : 19741.00 77.11 0.00 0.00 0.00 0.00 0.00 00:16:10.060 00:16:11.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.002 Nvme0n1 : 7.00 19761.86 77.19 0.00 0.00 0.00 0.00 0.00 00:16:11.002 =================================================================================================================== 00:16:11.002 Total : 19761.86 77.19 0.00 0.00 0.00 0.00 0.00 00:16:11.002 00:16:11.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.945 Nvme0n1 : 8.00 19769.12 77.22 0.00 0.00 0.00 0.00 0.00 00:16:11.945 =================================================================================================================== 00:16:11.945 Total : 19769.12 77.22 0.00 0.00 0.00 0.00 0.00 00:16:11.945 00:16:13.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.341 Nvme0n1 : 9.00 19778.44 77.26 0.00 0.00 0.00 0.00 0.00 00:16:13.341 =================================================================================================================== 00:16:13.341 Total : 19778.44 77.26 0.00 0.00 0.00 0.00 0.00 00:16:13.341 00:16:14.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.282 Nvme0n1 : 10.00 19787.30 77.29 0.00 0.00 0.00 0.00 0.00 00:16:14.282 =================================================================================================================== 00:16:14.282 Total : 19787.30 77.29 0.00 0.00 0.00 0.00 0.00 00:16:14.282 00:16:14.282 00:16:14.282 Latency(us) 00:16:14.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.282 Nvme0n1 : 10.00 19791.15 77.31 0.00 0.00 6464.85 1966.08 11342.77 00:16:14.282 =================================================================================================================== 00:16:14.282 Total : 19791.15 77.31 0.00 0.00 6464.85 1966.08 11342.77 00:16:14.282 0 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1494361 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1494361 ']' 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1494361 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1494361 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1494361' 00:16:14.282 killing process with pid 1494361 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1494361 00:16:14.282 Received shutdown signal, test time was about 10.000000 seconds 00:16:14.282 00:16:14.282 Latency(us) 00:16:14.282 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:14.282 =================================================================================================================== 00:16:14.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1494361 00:16:14.282 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.543 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:14.543 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:14.543 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1491056 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1491056 00:16:14.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1491056 Killed "${NVMF_APP[@]}" "$@" 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1496233 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1496233 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1496233 ']' 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:14.805 11:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:14.805 [2024-06-10 11:23:11.964012] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:14.805 [2024-06-10 11:23:11.964068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.805 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.066 [2024-06-10 11:23:12.052565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.066 [2024-06-10 11:23:12.117575] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.066 [2024-06-10 11:23:12.117610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.066 [2024-06-10 11:23:12.117616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.066 [2024-06-10 11:23:12.117622] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.066 [2024-06-10 11:23:12.117627] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.066 [2024-06-10 11:23:12.117645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.636 11:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:15.897 [2024-06-10 11:23:13.020610] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:15.897 [2024-06-10 11:23:13.020694] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:15.897 [2024-06-10 11:23:13.020720] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:15.897 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:16.158 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f -t 2000 00:16:16.418 [ 00:16:16.418 { 00:16:16.418 "name": "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f", 00:16:16.418 "aliases": [ 00:16:16.418 "lvs/lvol" 00:16:16.418 ], 00:16:16.418 "product_name": "Logical Volume", 00:16:16.418 "block_size": 4096, 00:16:16.418 "num_blocks": 38912, 00:16:16.418 "uuid": "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f", 00:16:16.418 "assigned_rate_limits": { 00:16:16.418 "rw_ios_per_sec": 0, 00:16:16.418 "rw_mbytes_per_sec": 0, 00:16:16.418 "r_mbytes_per_sec": 0, 00:16:16.418 "w_mbytes_per_sec": 0 00:16:16.418 }, 00:16:16.418 "claimed": false, 00:16:16.418 "zoned": false, 00:16:16.418 "supported_io_types": { 00:16:16.418 "read": true, 00:16:16.418 "write": true, 00:16:16.418 "unmap": true, 00:16:16.418 "write_zeroes": true, 00:16:16.418 "flush": false, 00:16:16.418 "reset": true, 00:16:16.418 "compare": false, 00:16:16.418 "compare_and_write": false, 00:16:16.419 "abort": false, 00:16:16.419 "nvme_admin": false, 00:16:16.419 "nvme_io": false 00:16:16.419 }, 00:16:16.419 "driver_specific": { 00:16:16.419 "lvol": { 00:16:16.419 "lvol_store_uuid": "bb5ad451-b52f-4a41-ae91-7e1dc4c398b5", 00:16:16.419 "base_bdev": "aio_bdev", 00:16:16.419 "thin_provision": false, 00:16:16.419 "num_allocated_clusters": 38, 00:16:16.419 "snapshot": false, 00:16:16.419 "clone": false, 00:16:16.419 "esnap_clone": false 00:16:16.419 } 00:16:16.419 } 00:16:16.419 } 00:16:16.419 ] 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:16.419 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:16.678 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:16.679 11:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:16.939 [2024-06-10 11:23:13.985201] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:16.939 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:17.199 request: 00:16:17.199 { 00:16:17.199 "uuid": "bb5ad451-b52f-4a41-ae91-7e1dc4c398b5", 00:16:17.199 "method": "bdev_lvol_get_lvstores", 00:16:17.199 "req_id": 1 00:16:17.199 } 00:16:17.199 Got JSON-RPC error response 00:16:17.199 response: 00:16:17.199 { 00:16:17.199 "code": -19, 00:16:17.199 "message": "No such device" 00:16:17.199 } 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:17.199 aio_bdev 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:17.199 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:17.460 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f -t 2000 00:16:17.720 [ 00:16:17.720 { 00:16:17.720 "name": "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f", 00:16:17.720 "aliases": [ 00:16:17.720 "lvs/lvol" 00:16:17.720 ], 00:16:17.720 "product_name": "Logical Volume", 00:16:17.720 "block_size": 4096, 00:16:17.720 "num_blocks": 38912, 00:16:17.720 "uuid": "691f3574-7ba7-4fc5-86bb-f61aa3da5e1f", 00:16:17.720 "assigned_rate_limits": { 00:16:17.720 "rw_ios_per_sec": 0, 00:16:17.720 "rw_mbytes_per_sec": 0, 00:16:17.720 "r_mbytes_per_sec": 0, 00:16:17.720 "w_mbytes_per_sec": 0 00:16:17.720 }, 00:16:17.720 "claimed": false, 00:16:17.720 "zoned": false, 00:16:17.720 "supported_io_types": { 00:16:17.720 "read": true, 00:16:17.720 "write": true, 00:16:17.720 "unmap": true, 00:16:17.720 "write_zeroes": true, 00:16:17.720 "flush": false, 00:16:17.720 "reset": true, 00:16:17.720 "compare": false, 00:16:17.720 "compare_and_write": false, 00:16:17.720 "abort": false, 00:16:17.720 "nvme_admin": false, 00:16:17.720 "nvme_io": false 00:16:17.720 }, 00:16:17.720 "driver_specific": { 00:16:17.720 "lvol": { 00:16:17.720 "lvol_store_uuid": "bb5ad451-b52f-4a41-ae91-7e1dc4c398b5", 00:16:17.720 "base_bdev": "aio_bdev", 00:16:17.720 "thin_provision": false, 00:16:17.720 "num_allocated_clusters": 38, 00:16:17.720 "snapshot": false, 00:16:17.720 "clone": false, 00:16:17.720 "esnap_clone": false 00:16:17.720 } 00:16:17.720 } 00:16:17.720 } 00:16:17.720 ] 00:16:17.720 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:17.720 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:17.720 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:17.980 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:17.980 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:17.980 11:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:17.980 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:17.980 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 691f3574-7ba7-4fc5-86bb-f61aa3da5e1f 00:16:18.240 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb5ad451-b52f-4a41-ae91-7e1dc4c398b5 00:16:18.505 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:18.766 00:16:18.766 real 0m17.523s 00:16:18.766 user 0m46.052s 00:16:18.766 sys 0m3.056s 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:18.766 ************************************ 00:16:18.766 END TEST lvs_grow_dirty 00:16:18.766 ************************************ 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:18.766 nvmf_trace.0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.766 rmmod nvme_tcp 00:16:18.766 rmmod nvme_fabrics 00:16:18.766 rmmod nvme_keyring 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1496233 ']' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1496233 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1496233 ']' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1496233 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:18.766 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1496233 00:16:19.027 11:23:15 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:19.027 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:19.027 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1496233' 00:16:19.027 killing process with pid 1496233 00:16:19.027 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1496233 00:16:19.027 11:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1496233 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.027 11:23:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.983 11:23:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:20.983 00:16:20.983 real 0m46.039s 00:16:20.983 user 1m9.249s 00:16:20.984 sys 0m11.011s 00:16:20.984 11:23:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:20.984 11:23:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:20.984 ************************************ 00:16:20.984 END TEST nvmf_lvs_grow 00:16:20.984 ************************************ 00:16:21.245 11:23:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:21.245 11:23:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:21.245 11:23:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:21.245 11:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.245 ************************************ 00:16:21.245 START TEST nvmf_bdev_io_wait 00:16:21.245 ************************************ 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:21.245 * Looking for test storage... 
00:16:21.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.245 11:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:29.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.379 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:29.379 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:29.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:29.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:29.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:29.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:16:29.380 00:16:29.380 --- 10.0.0.2 ping statistics --- 00:16:29.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.380 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:16:29.380 00:16:29.380 --- 10.0.0.1 ping statistics --- 00:16:29.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.380 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1501375 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1501375 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1501375 ']' 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.380 11:23:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:29.380 [2024-06-10 11:23:26.465893] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:16:29.380 [2024-06-10 11:23:26.465954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.380 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.380 [2024-06-10 11:23:26.558543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.640 [2024-06-10 11:23:26.652879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.640 [2024-06-10 11:23:26.652938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.640 [2024-06-10 11:23:26.652946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.640 [2024-06-10 11:23:26.652953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.640 [2024-06-10 11:23:26.652959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.640 [2024-06-10 11:23:26.653087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.640 [2024-06-10 11:23:26.653216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.640 [2024-06-10 11:23:26.653378] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.640 [2024-06-10 11:23:26.653378] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.209 [2024-06-10 11:23:27.412202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.209 11:23:27 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.209 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.469 Malloc0 00:16:30.469 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.469 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.469 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.469 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.469 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 [2024-06-10 11:23:27.481913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1501693 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1501695 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:30.470 { 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme$subsystem", 00:16:30.470 "trtype": "$TEST_TRANSPORT", 00:16:30.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "$NVMF_PORT", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.470 "hdgst": ${hdgst:-false}, 00:16:30.470 "ddgst": ${ddgst:-false} 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 } 00:16:30.470 EOF 00:16:30.470 )") 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1501697 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:30.470 { 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme$subsystem", 00:16:30.470 "trtype": "$TEST_TRANSPORT", 00:16:30.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "$NVMF_PORT", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.470 "hdgst": ${hdgst:-false}, 00:16:30.470 "ddgst": ${ddgst:-false} 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 } 00:16:30.470 EOF 00:16:30.470 )") 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1501699 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:30.470 { 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme$subsystem", 00:16:30.470 "trtype": "$TEST_TRANSPORT", 00:16:30.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "$NVMF_PORT", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.470 "hdgst": ${hdgst:-false}, 00:16:30.470 "ddgst": ${ddgst:-false} 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 } 00:16:30.470 EOF 00:16:30.470 )") 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:16:30.470 { 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme$subsystem", 00:16:30.470 "trtype": "$TEST_TRANSPORT", 00:16:30.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "$NVMF_PORT", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.470 "hdgst": ${hdgst:-false}, 00:16:30.470 "ddgst": ${ddgst:-false} 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 } 00:16:30.470 EOF 00:16:30.470 )") 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1501693 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme1", 00:16:30.470 "trtype": "tcp", 00:16:30.470 "traddr": "10.0.0.2", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "4420", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.470 "hdgst": false, 00:16:30.470 "ddgst": false 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 }' 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
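The gen_nvmf_target_json expansions traced above all come from one templated heredoc: the unquoted EOF delimiter lets $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and the loop variable $subsystem resolve at run time, jq pretty-prints the result, and each bdevperf instance reads it over /dev/fd/63 (a bash process substitution). A minimal stand-alone sketch of that pattern, not part of the harness itself, using the single-subsystem values printed in this trace:

# Sketch only: build one bdev_nvme_attach_controller entry the way
# gen_nvmf_target_json does above; values mirror the printf output in the log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" | jq .   # same resolved JSON as the printf output above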
00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme1", 00:16:30.470 "trtype": "tcp", 00:16:30.470 "traddr": "10.0.0.2", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "4420", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.470 "hdgst": false, 00:16:30.470 "ddgst": false 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 }' 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme1", 00:16:30.470 "trtype": "tcp", 00:16:30.470 "traddr": "10.0.0.2", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "4420", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.470 "hdgst": false, 00:16:30.470 "ddgst": false 00:16:30.470 }, 00:16:30.470 "method": "bdev_nvme_attach_controller" 00:16:30.470 }' 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:30.470 11:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.470 "params": { 00:16:30.470 "name": "Nvme1", 00:16:30.470 "trtype": "tcp", 00:16:30.470 "traddr": "10.0.0.2", 00:16:30.470 "adrfam": "ipv4", 00:16:30.470 "trsvcid": "4420", 00:16:30.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.471 "hdgst": false, 00:16:30.471 "ddgst": false 00:16:30.471 }, 00:16:30.471 "method": "bdev_nvme_attach_controller" 00:16:30.471 }' 00:16:30.471 [2024-06-10 11:23:27.533806] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:30.471 [2024-06-10 11:23:27.533868] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:30.471 [2024-06-10 11:23:27.535639] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:30.471 [2024-06-10 11:23:27.535687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:30.471 [2024-06-10 11:23:27.536294] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:30.471 [2024-06-10 11:23:27.536338] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:30.471 [2024-06-10 11:23:27.538026] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:16:30.471 [2024-06-10 11:23:27.538069] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:30.471 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.471 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.471 [2024-06-10 11:23:27.690357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.730 [2024-06-10 11:23:27.739860] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:16:30.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.730 [2024-06-10 11:23:27.764732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.730 [2024-06-10 11:23:27.800413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.730 [2024-06-10 11:23:27.813440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:16:30.730 [2024-06-10 11:23:27.848372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:16:30.730 [2024-06-10 11:23:27.849446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.730 [2024-06-10 11:23:27.896998] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:16:30.730 Running I/O for 1 seconds... 00:16:30.990 Running I/O for 1 seconds... 00:16:30.990 Running I/O for 1 seconds... 00:16:30.990 Running I/O for 1 seconds... 00:16:31.974 00:16:31.974 Latency(us) 00:16:31.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.974 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:31.974 Nvme1n1 : 1.00 200868.91 784.64 0.00 0.00 634.32 253.64 702.62 00:16:31.974 =================================================================================================================== 00:16:31.974 Total : 200868.91 784.64 0.00 0.00 634.32 253.64 702.62 00:16:31.974 00:16:31.974 Latency(us) 00:16:31.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.974 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:31.974 Nvme1n1 : 1.01 8750.13 34.18 0.00 0.00 14524.13 6351.95 23492.14 00:16:31.974 =================================================================================================================== 00:16:31.974 Total : 8750.13 34.18 0.00 0.00 14524.13 6351.95 23492.14 00:16:31.974 00:16:31.974 Latency(us) 00:16:31.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.974 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:31.974 Nvme1n1 : 1.00 18823.83 73.53 0.00 0.00 6783.33 3780.92 16031.11 00:16:31.974 =================================================================================================================== 00:16:31.974 Total : 18823.83 73.53 0.00 0.00 6783.33 3780.92 16031.11 00:16:31.974 00:16:31.974 Latency(us) 00:16:31.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.974 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:31.974 Nvme1n1 : 1.00 8685.43 33.93 0.00 0.00 14703.63 4108.60 34683.67 00:16:31.974 =================================================================================================================== 00:16:31.974 Total : 8685.43 33.93 0.00 0.00 14703.63 4108.60 34683.67 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
1501695 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1501697 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1501699 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.234 rmmod nvme_tcp 00:16:32.234 rmmod nvme_fabrics 00:16:32.234 rmmod nvme_keyring 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1501375 ']' 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1501375 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1501375 ']' 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1501375 00:16:32.234 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1501375 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1501375' 00:16:32.235 killing process with pid 1501375 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1501375 00:16:32.235 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1501375 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.495 11:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.406 11:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.406 00:16:34.406 real 0m13.316s 00:16:34.406 user 0m19.002s 00:16:34.406 sys 0m7.402s 00:16:34.406 11:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:34.406 11:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:34.406 ************************************ 00:16:34.406 END TEST nvmf_bdev_io_wait 00:16:34.406 ************************************ 00:16:34.669 11:23:31 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:34.669 11:23:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:34.669 11:23:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:34.669 11:23:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.669 ************************************ 00:16:34.669 START TEST nvmf_queue_depth 00:16:34.669 ************************************ 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:34.669 * Looking for test storage... 00:16:34.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.669 11:23:31 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.669 11:23:31 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.670 11:23:31 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.670 11:23:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.854 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:42.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:42.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:42.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:42.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.855 11:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:16:43.118 00:16:43.118 --- 10.0.0.2 ping statistics --- 00:16:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.118 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:16:43.118 00:16:43.118 --- 10.0.0.1 ping statistics --- 00:16:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.118 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1506388 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1506388 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 
1506388 ']' 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:43.118 11:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.118 [2024-06-10 11:23:40.237677] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:43.118 [2024-06-10 11:23:40.237740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.118 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.118 [2024-06-10 11:23:40.310916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.378 [2024-06-10 11:23:40.381357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.378 [2024-06-10 11:23:40.381395] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.378 [2024-06-10 11:23:40.381402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.378 [2024-06-10 11:23:40.381408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.378 [2024-06-10 11:23:40.381413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
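With nvmf_tgt up inside cvl_0_0_ns_spdk, the bring-up that queue_depth.sh performs in the trace that follows (transport, malloc bdev, subsystem, namespace, listener) reduces to five RPCs. A stand-alone sketch, assuming a checkout-relative scripts/rpc.py and the default /var/tmp/spdk.sock socket used by the rpc_cmd wrapper:

# Sketch of the target configuration performed by the following trace.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420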
00:16:43.378 [2024-06-10 11:23:40.381436] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.949 11:23:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.950 [2024-06-10 11:23:41.118923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.950 Malloc0 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.950 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:44.210 [2024-06-10 11:23:41.184621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1506547 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1506547 /var/tmp/bdevperf.sock 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1506547 ']' 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:44.210 11:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:44.210 [2024-06-10 11:23:41.237488] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:16:44.210 [2024-06-10 11:23:41.237535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506547 ] 00:16:44.210 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.210 [2024-06-10 11:23:41.315722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.210 [2024-06-10 11:23:41.376882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.150 NVMe0n1 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.150 11:23:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.150 Running I/O for 10 seconds... 
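queue_depth.sh drives bdevperf over its own RPC socket: -z keeps bdevperf idle waiting for RPCs, the NVMe-oF controller is attached through that socket, and bdevperf.py perform_tests starts the 10-second verify run whose results follow. Condensed into a sketch with checkout-relative paths assumed:

# Sketch of the RPC-driven bdevperf flow traced above.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
sleep 2   # the harness instead polls the socket with waitforlisten
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$BDEVPERF_PID"   # the harness kills bdevperf once the run has reported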
00:16:57.376 00:16:57.376 Latency(us) 00:16:57.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.376 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:57.376 Verification LBA range: start 0x0 length 0x4000 00:16:57.376 NVMe0n1 : 10.06 10313.24 40.29 0.00 0.00 98856.63 15728.64 66140.95 00:16:57.376 =================================================================================================================== 00:16:57.376 Total : 10313.24 40.29 0.00 0.00 98856.63 15728.64 66140.95 00:16:57.376 0 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1506547 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1506547 ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1506547 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1506547 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1506547' 00:16:57.377 killing process with pid 1506547 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1506547 00:16:57.377 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.377 00:16:57.377 Latency(us) 00:16:57.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.377 =================================================================================================================== 00:16:57.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1506547 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.377 rmmod nvme_tcp 00:16:57.377 rmmod nvme_fabrics 00:16:57.377 rmmod nvme_keyring 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1506388 ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 
1506388 ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1506388' 00:16:57.377 killing process with pid 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1506388 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.377 11:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.949 11:23:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.949 00:16:57.949 real 0m23.306s 00:16:57.949 user 0m26.348s 00:16:57.949 sys 0m7.328s 00:16:57.949 11:23:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:57.949 11:23:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:57.949 ************************************ 00:16:57.949 END TEST nvmf_queue_depth 00:16:57.949 ************************************ 00:16:57.949 11:23:55 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:57.949 11:23:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:57.949 11:23:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:57.949 11:23:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.949 ************************************ 00:16:57.949 START TEST nvmf_target_multipath 00:16:57.949 ************************************ 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:57.949 * Looking for test storage... 
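The nvmftestfini sequence in the trace above unwinds what the queue_depth test set up: the kernel NVMe/TCP initiator modules are removed, the nvmf_tgt process is killed, and the test address is flushed before the multipath test starts. Condensed into a sketch (pid variable as used by the harness; the _remove_spdk_ns step is omitted here):

# Sketch of the teardown traced above.
sync
modprobe -v -r nvme-tcp      # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in the trace
modprobe -v -r nvme-fabrics
kill "$nvmfpid"              # nvmf_tgt started by nvmfappstart
ip -4 addr flush cvl_0_1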
00:16:57.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.949 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.950 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.950 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.210 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.211 
11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.211 11:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.351 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:06.352 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:06.352 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.352 
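The discovery loop in the trace above resolves each supported NIC from its PCI function to a kernel net interface by listing the device's net/ directory in sysfs and keeping interfaces whose operstate is up. A minimal sketch of the same idea, assuming the Intel E810 device ID (8086:159b) seen in this run and using lspci in place of the script's pre-built pci_bus_cache:

  # enumerate E810 functions, then map each one to its net interface via sysfs
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue
      name=${dev##*/}                                   # e.g. cvl_0_0
      [ "$(cat "$dev/operstate")" = up ] && echo "Found net devices under $pci: $name"
    done
  done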
11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:06.352 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:06.352 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.352 11:24:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.352 
11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:17:06.352 00:17:06.352 --- 10.0.0.2 ping statistics --- 00:17:06.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.352 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:17:06.352 00:17:06.352 --- 10.0.0.1 ping statistics --- 00:17:06.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.352 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:06.352 only one NIC for nvmf test 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:06.352 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.353 rmmod nvme_tcp 00:17:06.353 rmmod nvme_fabrics 00:17:06.353 rmmod nvme_keyring 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:06.353 11:24:03 
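nvmf_tcp_init in the trace above turns the two E810 ports into a point-to-point test bed: one port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2), and a one-packet ping in each direction confirms connectivity before the test proceeds. Condensed, the sequence is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator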
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.353 11:24:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.267 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.528 00:17:08.528 real 0m10.451s 00:17:08.528 user 0m2.280s 00:17:08.528 sys 0m6.097s 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:08.528 11:24:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:08.528 
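nvmftestfini reverses that setup before the next test starts: the initiator-side NVMe/TCP modules are unloaded and the addresses and namespace are cleaned up. Roughly, and assuming _remove_spdk_ns amounts to deleting the namespace created above:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns does for this namespace
  ip -4 addr flush cvl_0_1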
************************************ 00:17:08.528 END TEST nvmf_target_multipath 00:17:08.528 ************************************ 00:17:08.528 11:24:05 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:08.528 11:24:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:08.528 11:24:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:08.528 11:24:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.528 ************************************ 00:17:08.528 START TEST nvmf_zcopy 00:17:08.528 ************************************ 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:08.528 * Looking for test storage... 00:17:08.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.528 11:24:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.529 11:24:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:16.672 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.672 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.672 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.672 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:16.673 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:16.673 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:16.673 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:16.673 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:17:16.673 00:17:16.673 --- 10.0.0.2 ping statistics --- 00:17:16.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.673 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:17:16.673 00:17:16.673 --- 10.0.0.1 ping statistics --- 00:17:16.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.673 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.673 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1517857 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1517857 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1517857 ']' 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 11:24:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.934 [2024-06-10 11:24:13.958237] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:17:16.934 [2024-06-10 11:24:13.958301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.934 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.934 [2024-06-10 11:24:14.032014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.934 [2024-06-10 11:24:14.101555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.934 [2024-06-10 11:24:14.101592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
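nvmfappstart in the trace above launches the SPDK target inside the target-side namespace and then blocks until its RPC socket answers. A simplified sketch of that start-and-wait pattern, with the socket path and core mask taken from this run (waitforlisten performs more robust liveness checks than the polling loop shown here):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # do not issue configuration RPCs until the UNIX-domain RPC listener exists
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done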
00:17:16.934 [2024-06-10 11:24:14.101600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.934 [2024-06-10 11:24:14.101606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.934 [2024-06-10 11:24:14.101611] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.934 [2024-06-10 11:24:14.101629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 [2024-06-10 11:24:14.838709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 [2024-06-10 11:24:14.854852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 malloc0 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 
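The rpc_cmd calls above, together with the namespace attach that follows just below, provision the target for the zcopy run: a TCP transport with zero-copy enabled, one subsystem capped at 10 namespaces, a data listener plus the discovery listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks exposed as namespace 1. Issued directly with rpc.py from the spdk tree, the same sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1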
11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.874 { 00:17:17.874 "params": { 00:17:17.874 "name": "Nvme$subsystem", 00:17:17.874 "trtype": "$TEST_TRANSPORT", 00:17:17.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.874 "adrfam": "ipv4", 00:17:17.874 "trsvcid": "$NVMF_PORT", 00:17:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.874 "hdgst": ${hdgst:-false}, 00:17:17.874 "ddgst": ${ddgst:-false} 00:17:17.874 }, 00:17:17.874 "method": "bdev_nvme_attach_controller" 00:17:17.874 } 00:17:17.874 EOF 00:17:17.874 )") 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:17.874 11:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.874 "params": { 00:17:17.874 "name": "Nvme1", 00:17:17.874 "trtype": "tcp", 00:17:17.874 "traddr": "10.0.0.2", 00:17:17.874 "adrfam": "ipv4", 00:17:17.874 "trsvcid": "4420", 00:17:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.874 "hdgst": false, 00:17:17.874 "ddgst": false 00:17:17.874 }, 00:17:17.874 "method": "bdev_nvme_attach_controller" 00:17:17.874 }' 00:17:17.874 [2024-06-10 11:24:14.932322] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:17:17.874 [2024-06-10 11:24:14.932370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517957 ] 00:17:17.874 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.874 [2024-06-10 11:24:15.012739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.874 [2024-06-10 11:24:15.074290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.134 Running I/O for 10 seconds... 
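On the initiator side, bdevperf does not read a config file from disk: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown above, and the test hands it to bdevperf through process substitution, which is where the --json /dev/fd/62 in the command line comes from. Stripped to its essentials, and assuming the test's common helpers are sourced so gen_nvmf_target_json is defined:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192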
00:17:28.222 00:17:28.222 Latency(us) 00:17:28.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.222 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:28.222 Verification LBA range: start 0x0 length 0x1000 00:17:28.222 Nvme1n1 : 10.01 7406.74 57.87 0.00 0.00 17229.22 1241.40 26819.35 00:17:28.222 =================================================================================================================== 00:17:28.222 Total : 7406.74 57.87 0.00 0.00 17229.22 1241.40 26819.35 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1519776 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.484 { 00:17:28.484 "params": { 00:17:28.484 "name": "Nvme$subsystem", 00:17:28.484 "trtype": "$TEST_TRANSPORT", 00:17:28.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.484 "adrfam": "ipv4", 00:17:28.484 "trsvcid": "$NVMF_PORT", 00:17:28.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.484 "hdgst": ${hdgst:-false}, 00:17:28.484 "ddgst": ${ddgst:-false} 00:17:28.484 }, 00:17:28.484 "method": "bdev_nvme_attach_controller" 00:17:28.484 } 00:17:28.484 EOF 00:17:28.484 )") 00:17:28.484 [2024-06-10 11:24:25.499265] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:28.484 [2024-06-10 11:24:25.499296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:28.484 [2024-06-10 11:24:25.507255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.507267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:28.484 11:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.484 "params": { 00:17:28.484 "name": "Nvme1", 00:17:28.484 "trtype": "tcp", 00:17:28.484 "traddr": "10.0.0.2", 00:17:28.484 "adrfam": "ipv4", 00:17:28.484 "trsvcid": "4420", 00:17:28.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.484 "hdgst": false, 00:17:28.484 "ddgst": false 00:17:28.484 }, 00:17:28.484 "method": "bdev_nvme_attach_controller" 00:17:28.484 }' 00:17:28.484 [2024-06-10 11:24:25.515276] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.515288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.523298] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.523307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.531319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.531329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.539339] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.539348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.539342] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
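The repeated 'Requested NSID 1 already in use' messages that run through the rest of this trace are expected, not a failure: while the second bdevperf job (randrw, 5 seconds) starts up and runs, the test keeps re-issuing the namespace-attach RPC for an NSID that is already present, and each attempt is rejected in nvmf_rpc_ns_paused. A minimal reproduction of that pattern, assuming the subsystem and malloc0 bdev created earlier and $perfpid holding the bdevperf pid (the real test drives this from zcopy.sh rather than a hand-written loop):

  while kill -0 "$perfpid" 2>/dev/null; do
    # NSID 1 is already attached, so every attempt fails with "already in use"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done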
00:17:28.484 [2024-06-10 11:24:25.539389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519776 ] 00:17:28.484 [2024-06-10 11:24:25.547360] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.547369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.555382] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.555391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.563403] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.563411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.484 [2024-06-10 11:24:25.571425] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.571433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.579446] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.579455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.587467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.587475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.595489] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.595498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.603509] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.603517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.611529] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.611538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.618628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.484 [2024-06-10 11:24:25.619549] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.619558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.627571] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.627582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.635591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.635601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.643611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.643621] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.651635] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.651650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.659654] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.659663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.667676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.667685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.484 [2024-06-10 11:24:25.675698] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.484 [2024-06-10 11:24:25.675706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.485 [2024-06-10 11:24:25.679611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.485 [2024-06-10 11:24:25.683719] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.485 [2024-06-10 11:24:25.683727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.485 [2024-06-10 11:24:25.691747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.485 [2024-06-10 11:24:25.691760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.485 [2024-06-10 11:24:25.699767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.485 [2024-06-10 11:24:25.699780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.711799] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.711812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.719816] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.719832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.727844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.727853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.735863] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.735871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.743886] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.743894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.751921] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.751937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.759931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.759942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.767950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.767965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.775975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.775987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.783993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.784002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.792012] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.792021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.800032] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.800040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.808053] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.808061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.816084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.816094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.824099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.824110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.832121] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.832131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.840143] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.840151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.848167] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.848176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.856190] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.856199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.864213] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.864222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.872235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.746 [2024-06-10 11:24:25.872246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.880256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:17:28.746 [2024-06-10 11:24:25.880265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.746 [2024-06-10 11:24:25.888278] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.888287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.896300] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.896309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.904322] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.904330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.912345] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.912354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.920367] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.920385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.928390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.928399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.936412] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.936421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.944435] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.944443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.952456] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.952464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.747 [2024-06-10 11:24:25.960479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.747 [2024-06-10 11:24:25.960488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.008 [2024-06-10 11:24:26.010297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.008 [2024-06-10 11:24:26.010316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.008 [2024-06-10 11:24:26.016637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.008 [2024-06-10 11:24:26.016648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.008 Running I/O for 5 seconds... 
00:17:29.008 - 00:17:31.362 [2024-06-10 11:24:26.024655 - 11:24:28.439244] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use (repeated)
00:17:29.008 - 00:17:31.362 [2024-06-10 11:24:26.024664 - 11:24:28.430340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (repeated)
00:17:31.362 [2024-06-10 11:24:28.439264]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.448101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.448118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.456482] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.456498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.465025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.465042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.473479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.473496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.482488] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.482505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.491088] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.491106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.499724] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.499740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.508491] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.508507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.517234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.517251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.525884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.525901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.534663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.534679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.543451] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.543469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.552311] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.552328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.561159] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.561176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.569814] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.569843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.362 [2024-06-10 11:24:28.578569] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.362 [2024-06-10 11:24:28.578587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.587199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.587217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.596025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.596043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.604698] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.604716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.613510] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.613528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.622305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.622322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.631226] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.631243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.640074] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.640091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.648501] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.648518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.657311] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.657328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.666041] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.666059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.674723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.674740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.683378] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.683395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.692260] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.692278] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.701123] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.701140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.709819] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.709841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.718546] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.718563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.727148] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.727165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.735811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.735833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.744293] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.744310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.752924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.752941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.761648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.761665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.770458] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.770476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.779174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.779190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.787829] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.787845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.796450] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.796467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.806423] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.806440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.815989] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.816005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.823488] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.622 [2024-06-10 11:24:28.823505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.622 [2024-06-10 11:24:28.834577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.623 [2024-06-10 11:24:28.834594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.623 [2024-06-10 11:24:28.842755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.623 [2024-06-10 11:24:28.842772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.851962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.851979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.860901] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.860918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.869538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.869555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.878361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.878378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.887375] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.887392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.896002] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.896019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.904692] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.904709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.913546] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.913563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.922032] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.922049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.930625] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.930641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.938912] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.938929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.948155] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.948171] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.958166] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.958183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.968017] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.968034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.977657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.977674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.985297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.985313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:28.996438] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:28.996456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.004944] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.004961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.013662] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.013678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.022528] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.022546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.031268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.031285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.039919] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.039935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.048711] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.048728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.057455] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.057472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.066114] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.066131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.074437] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.074453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.083507] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.083523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.092387] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.092405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.882 [2024-06-10 11:24:29.101256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.882 [2024-06-10 11:24:29.101273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.109952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.109970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.118619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.118636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.127439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.127456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.136048] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.136065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.144859] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.144876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.153839] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.153856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.162695] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.162712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.171532] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.171550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.180246] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.180262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.188986] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.189002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.197967] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.197984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.206777] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.206793] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.215618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.215634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.224407] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.224424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.233012] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.233029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.241532] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.241549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.250233] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.250250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.258777] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.258795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.267348] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.267369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.275950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.275968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.284862] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.284878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.293549] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.293565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.302321] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.302338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.311070] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.311087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.319793] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.319809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.328025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.328041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.336871] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.336887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.345367] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.345383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.353955] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.353972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.143 [2024-06-10 11:24:29.362766] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.143 [2024-06-10 11:24:29.362783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.371632] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.371650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.380311] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.380328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.388941] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.388959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.397463] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.397480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.406008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.406025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.414796] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.414812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.423563] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.423580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.432355] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.432375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.441413] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.441430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.403 [2024-06-10 11:24:29.450083] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.403 [2024-06-10 11:24:29.450100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.468199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.468216] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.476464] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.476481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.487034] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.487051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.495031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.495048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.506132] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.506149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.515861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.515877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.523361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.523377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.534867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.534884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.543371] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.543387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.553477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.553493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.561910] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.561926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.572537] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.572553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.582416] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.582433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.590368] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.590385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.602010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.602028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.610328] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.610345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.404 [2024-06-10 11:24:29.620792] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.404 [2024-06-10 11:24:29.620813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.663 [2024-06-10 11:24:29.630696] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.663 [2024-06-10 11:24:29.630713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.663 [2024-06-10 11:24:29.640222] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.663 [2024-06-10 11:24:29.640238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.649836] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.649852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.659266] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.659282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.668840] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.668857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.678342] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.678359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.688014] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.688031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.696088] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.696104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.706812] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.706833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.715101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.715118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.725298] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.725315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.734939] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.734955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.744566] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.744582] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.752344] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.752360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.763104] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.763121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.772784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.772800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.782291] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.782307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.790156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.790172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.801906] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.801927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.810531] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.810548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.822051] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.822068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.831649] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.831665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.841122] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.841138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.850728] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.850745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.860330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.860346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.870844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.870860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.664 [2024-06-10 11:24:29.880489] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.664 [2024-06-10 11:24:29.880505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.888176] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.888193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.899473] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.899491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.907439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.907455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.918930] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.918947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.927477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.927493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.936258] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.936275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.944884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.944901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.953596] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.953612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.962665] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.962682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.971555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.971572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.980308] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.980325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.989171] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.989187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:29.998154] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:29.998171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.010774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.010793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.026221] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.026242] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.037627] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.037645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.054176] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.054194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.070005] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.070023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.081551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.081568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.098286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.098304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.113863] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.113881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.125677] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.125695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.141774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.141792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.157885] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.157903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.169222] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.169240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.185210] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.185227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.201544] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.201562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.998 [2024-06-10 11:24:30.213266] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.998 [2024-06-10 11:24:30.213283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.229675] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.229693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.245272] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.245290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.256687] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.256704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.273490] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.273508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.289801] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.289818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.305952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.305970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.320278] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.320295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.335483] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.335500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.347204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.347220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.363055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.363072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.378938] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.378956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.393058] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.393076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.408360] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.408376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.424699] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.424717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.436019] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.259 [2024-06-10 11:24:30.436036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.259 [2024-06-10 11:24:30.452445] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.260 [2024-06-10 11:24:30.452462] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.260 [2024-06-10 11:24:30.468392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.260 [2024-06-10 11:24:30.468409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.260 [2024-06-10 11:24:30.479963] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.260 [2024-06-10 11:24:30.479980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.496099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.496116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.512285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.512303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.526428] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.526446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.542677] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.542694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.558738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.558754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.574826] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.574844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.586790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.586807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.602817] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.602839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.619006] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.619023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.633234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.633252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.649168] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.649185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.665587] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.665605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.681358] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.681375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.697548] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.697565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.713359] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.713376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.729241] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.729258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.521 [2024-06-10 11:24:30.745073] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.521 [2024-06-10 11:24:30.745089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.761679] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.761696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.777961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.777978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.789575] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.789592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.805578] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.805594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.821747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.821764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.833381] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.833398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.849641] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.849658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.865359] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.865376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.880663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.880680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.897215] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.897232] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.913774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.913791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.925364] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.925381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.941609] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.941625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.957987] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.958005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.969714] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.969731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:30.985624] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:30.985640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.782 [2024-06-10 11:24:31.002308] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.782 [2024-06-10 11:24:31.002324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.042 [2024-06-10 11:24:31.018509] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.018526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.033738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.033755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 00:17:34.043 Latency(us) 00:17:34.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.043 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:34.043 Nvme1n1 : 5.01 14570.39 113.83 0.00 0.00 8776.59 4159.02 20164.92 00:17:34.043 =================================================================================================================== 00:17:34.043 Total : 14570.39 113.83 0.00 0.00 8776.59 4159.02 20164.92 00:17:34.043 [2024-06-10 11:24:31.045921] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.045942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.057949] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.057962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.069987] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.070001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.082014] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.082027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.094044] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.094058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.106074] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.106087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.118105] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.118116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.130139] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.130150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.142171] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.142182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.154202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.154214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 [2024-06-10 11:24:31.166232] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:34.043 [2024-06-10 11:24:31.166241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1519776) - No such process 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1519776 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:34.043 delay0 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.043 11:24:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:34.043 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.304 [2024-06-10 11:24:31.322018] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:42.442 Initializing NVMe Controllers 00:17:42.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:42.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:42.442 Initialization complete. Launching workers. 00:17:42.442 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 26819 00:17:42.442 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26936, failed to submit 122 00:17:42.442 success 26857, unsuccess 79, failed 0 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.442 rmmod nvme_tcp 00:17:42.442 rmmod nvme_fabrics 00:17:42.442 rmmod nvme_keyring 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1517857 ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1517857 ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1517857' 00:17:42.442 killing process with pid 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1517857 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.442 11:24:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.442 11:24:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.825 11:24:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.825 00:17:43.825 real 0m35.082s 00:17:43.825 user 0m45.927s 00:17:43.825 sys 0m11.336s 00:17:43.825 11:24:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:43.825 11:24:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.825 ************************************ 00:17:43.825 END TEST nvmf_zcopy 00:17:43.825 ************************************ 00:17:43.825 11:24:40 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:43.825 11:24:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:43.825 11:24:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:43.825 11:24:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.825 ************************************ 00:17:43.825 START TEST nvmf_nmic 00:17:43.825 ************************************ 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:43.825 * Looking for test storage... 00:17:43.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.825 
11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.825 11:24:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.826 11:24:40 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.826 11:24:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.981 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:51.982 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:51.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:51.982 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:51.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.982 11:24:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:17:51.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:17:51.982 00:17:51.982 --- 10.0.0.2 ping statistics --- 00:17:51.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.982 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:17:51.982 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:17:52.243 00:17:52.243 --- 10.0.0.1 ping statistics --- 00:17:52.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.244 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1526397 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1526397 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1526397 ']' 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:52.244 11:24:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:52.244 [2024-06-10 11:24:49.291951] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:17:52.244 [2024-06-10 11:24:49.292006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.244 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.244 [2024-06-10 11:24:49.381284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.505 [2024-06-10 11:24:49.476439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.505 [2024-06-10 11:24:49.476499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.505 [2024-06-10 11:24:49.476507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.505 [2024-06-10 11:24:49.476514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.505 [2024-06-10 11:24:49.476519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.505 [2024-06-10 11:24:49.476647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.505 [2024-06-10 11:24:49.476791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.505 [2024-06-10 11:24:49.476948] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.505 [2024-06-10 11:24:49.476949] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 [2024-06-10 11:24:50.187399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 Malloc0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 [2024-06-10 11:24:50.243578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:53.077 test case1: single bdev can't be used in multiple subsystems 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.077 [2024-06-10 11:24:50.279528] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:53.077 [2024-06-10 11:24:50.279546] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:53.077 [2024-06-10 11:24:50.279553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.077 request: 00:17:53.077 { 00:17:53.077 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:53.077 "namespace": { 00:17:53.077 "bdev_name": "Malloc0", 00:17:53.077 "no_auto_visible": false 00:17:53.077 }, 00:17:53.077 "method": "nvmf_subsystem_add_ns", 00:17:53.077 "req_id": 1 00:17:53.077 } 00:17:53.077 Got JSON-RPC error response 00:17:53.077 response: 00:17:53.077 { 00:17:53.077 "code": -32602, 00:17:53.077 "message": "Invalid parameters" 00:17:53.077 } 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:53.077 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:17:53.078 Adding namespace failed - expected result. 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:53.078 test case2: host connect to nvmf target in multiple paths 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:53.078 [2024-06-10 11:24:50.291638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.078 11:24:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.990 11:24:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:56.374 11:24:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.374 11:24:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:17:56.374 11:24:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.374 11:24:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:17:56.374 11:24:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:17:58.286 11:24:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:58.286 [global] 00:17:58.286 thread=1 00:17:58.286 invalidate=1 00:17:58.286 rw=write 00:17:58.286 time_based=1 00:17:58.286 runtime=1 00:17:58.286 ioengine=libaio 00:17:58.286 direct=1 00:17:58.286 bs=4096 00:17:58.286 iodepth=1 00:17:58.286 norandommap=0 00:17:58.286 numjobs=1 00:17:58.286 00:17:58.286 verify_dump=1 00:17:58.286 verify_backlog=512 00:17:58.286 verify_state_save=0 00:17:58.286 do_verify=1 00:17:58.286 verify=crc32c-intel 00:17:58.286 [job0] 00:17:58.286 filename=/dev/nvme0n1 00:17:58.286 Could not set queue depth (nvme0n1) 00:17:58.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:58.545 fio-3.35 00:17:58.545 Starting 1 thread 00:17:59.924 00:17:59.924 job0: (groupid=0, jobs=1): err= 0: pid=1527511: Mon Jun 10 11:24:56 2024 00:17:59.924 read: IOPS=18, BW=74.5KiB/s 
(76.3kB/s)(76.0KiB/1020msec) 00:17:59.924 slat (nsec): min=10365, max=27021, avg=24925.47, stdev=3577.39 00:17:59.924 clat (usec): min=40898, max=41443, avg=40991.04, stdev=117.45 00:17:59.924 lat (usec): min=40925, max=41453, avg=41015.96, stdev=114.13 00:17:59.924 clat percentiles (usec): 00:17:59.924 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:59.924 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:59.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:59.924 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:59.924 | 99.99th=[41681] 00:17:59.924 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:17:59.924 slat (nsec): min=9478, max=63585, avg=26691.03, stdev=9854.28 00:17:59.924 clat (usec): min=224, max=558, avg=437.45, stdev=56.88 00:17:59.924 lat (usec): min=234, max=589, avg=464.14, stdev=58.65 00:17:59.924 clat percentiles (usec): 00:17:59.924 | 1.00th=[ 260], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 388], 00:17:59.924 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 465], 00:17:59.924 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 506], 00:17:59.924 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 562], 00:17:59.924 | 99.99th=[ 562] 00:17:59.924 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:59.924 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:59.924 lat (usec) : 250=0.56%, 500=89.83%, 750=6.03% 00:17:59.924 lat (msec) : 50=3.58% 00:17:59.924 cpu : usr=0.59%, sys=1.47%, ctx=531, majf=0, minf=1 00:17:59.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.924 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.924 00:17:59.924 Run status group 0 (all jobs): 00:17:59.924 READ: bw=74.5KiB/s (76.3kB/s), 74.5KiB/s-74.5KiB/s (76.3kB/s-76.3kB/s), io=76.0KiB (77.8kB), run=1020-1020msec 00:17:59.924 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:17:59.924 00:17:59.924 Disk stats (read/write): 00:17:59.924 nvme0n1: ios=66/512, merge=0/0, ticks=724/226, in_queue=950, util=94.19% 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:59.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic 
-- target/nmic.sh@53 -- # nvmftestfini 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:59.924 rmmod nvme_tcp 00:17:59.924 rmmod nvme_fabrics 00:17:59.924 rmmod nvme_keyring 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1526397 ']' 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1526397 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1526397 ']' 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1526397 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1526397 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1526397' 00:17:59.924 killing process with pid 1526397 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1526397 00:17:59.924 11:24:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1526397 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.924 11:24:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.469 11:24:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:02.469 00:18:02.469 real 0m18.462s 00:18:02.469 user 0m40.856s 00:18:02.469 sys 0m6.952s 00:18:02.469 11:24:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:02.469 11:24:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 ************************************ 00:18:02.469 END TEST nvmf_nmic 00:18:02.469 ************************************ 00:18:02.469 11:24:59 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:02.469 11:24:59 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:02.469 11:24:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:02.469 11:24:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 ************************************ 00:18:02.469 START TEST nvmf_fio_target 00:18:02.469 ************************************ 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:02.469 * Looking for test storage... 00:18:02.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.469 11:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.667 11:25:07 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:10.667 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:10.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.667 11:25:07 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:10.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.667 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:10.668 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:18:10.668 00:18:10.668 --- 10.0.0.2 ping statistics --- 00:18:10.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.668 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:18:10.668 00:18:10.668 --- 10.0.0.1 ping statistics --- 00:18:10.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.668 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1532034 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1532034 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1532034 ']' 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:10.668 11:25:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.668 [2024-06-10 11:25:07.564040] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:18:10.668 [2024-06-10 11:25:07.564100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.668 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.668 [2024-06-10 11:25:07.657472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.668 [2024-06-10 11:25:07.750538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.668 [2024-06-10 11:25:07.750599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.668 [2024-06-10 11:25:07.750607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.668 [2024-06-10 11:25:07.750614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.668 [2024-06-10 11:25:07.750620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.668 [2024-06-10 11:25:07.750688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.668 [2024-06-10 11:25:07.751084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.668 [2024-06-10 11:25:07.751211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.668 [2024-06-10 11:25:07.751212] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.239 11:25:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:11.239 11:25:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:18:11.239 11:25:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.239 11:25:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:11.239 11:25:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.498 11:25:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.498 11:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.498 [2024-06-10 11:25:08.643001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.498 11:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:11.757 11:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:11.757 11:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.017 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:12.017 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.277 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:12.277 11:25:09 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.538 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:12.538 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:12.538 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.799 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:12.799 11:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.058 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:13.058 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.317 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:13.317 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:13.575 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:13.575 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:13.575 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.833 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:13.833 11:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:14.093 11:25:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.353 [2024-06-10 11:25:11.364506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.353 11:25:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:14.613 11:25:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:14.613 11:25:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:18:16.519 11:25:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:18:18.424 11:25:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:18.424 [global] 00:18:18.424 thread=1 00:18:18.424 invalidate=1 00:18:18.424 rw=write 00:18:18.424 time_based=1 00:18:18.424 runtime=1 00:18:18.424 ioengine=libaio 00:18:18.424 direct=1 00:18:18.424 bs=4096 00:18:18.424 iodepth=1 00:18:18.424 norandommap=0 00:18:18.424 numjobs=1 00:18:18.424 00:18:18.424 verify_dump=1 00:18:18.424 verify_backlog=512 00:18:18.424 verify_state_save=0 00:18:18.424 do_verify=1 00:18:18.424 verify=crc32c-intel 00:18:18.424 [job0] 00:18:18.424 filename=/dev/nvme0n1 00:18:18.424 [job1] 00:18:18.424 filename=/dev/nvme0n2 00:18:18.424 [job2] 00:18:18.424 filename=/dev/nvme0n3 00:18:18.424 [job3] 00:18:18.424 filename=/dev/nvme0n4 00:18:18.424 Could not set queue depth (nvme0n1) 00:18:18.424 Could not set queue depth (nvme0n2) 00:18:18.424 Could not set queue depth (nvme0n3) 00:18:18.425 Could not set queue depth (nvme0n4) 00:18:18.683 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.683 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.683 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.683 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.683 fio-3.35 00:18:18.683 Starting 4 threads 00:18:20.061 00:18:20.061 job0: (groupid=0, jobs=1): err= 0: pid=1533755: Mon Jun 10 11:25:16 2024 00:18:20.061 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:20.061 slat (nsec): min=7265, max=45103, avg=26678.52, stdev=2258.67 00:18:20.061 clat (usec): min=696, max=1316, avg=1015.64, stdev=101.56 00:18:20.061 lat (usec): min=723, max=1343, avg=1042.32, stdev=101.47 00:18:20.061 clat percentiles (usec): 00:18:20.061 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 938], 00:18:20.061 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057], 00:18:20.061 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:18:20.061 | 99.00th=[ 1188], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1319], 00:18:20.061 | 99.99th=[ 1319] 00:18:20.061 write: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec); 0 zone resets 00:18:20.061 slat (nsec): min=9183, max=76929, avg=30418.30, stdev=10213.27 00:18:20.061 clat (usec): min=258, max=854, 
avg=598.36, stdev=113.12 00:18:20.061 lat (usec): min=269, max=904, avg=628.78, stdev=117.41 00:18:20.061 clat percentiles (usec): 00:18:20.061 | 1.00th=[ 318], 5.00th=[ 396], 10.00th=[ 437], 20.00th=[ 498], 00:18:20.061 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:18:20.061 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:18:20.061 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 857], 99.95th=[ 857], 00:18:20.061 | 99.99th=[ 857] 00:18:20.061 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:20.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:20.061 lat (usec) : 500=11.95%, 750=42.70%, 1000=20.02% 00:18:20.061 lat (msec) : 2=25.34% 00:18:20.061 cpu : usr=3.30%, sys=4.10%, ctx=1242, majf=0, minf=1 00:18:20.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.061 issued rwts: total=512,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.061 job1: (groupid=0, jobs=1): err= 0: pid=1533756: Mon Jun 10 11:25:16 2024 00:18:20.061 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:20.061 slat (nsec): min=6636, max=61031, avg=27415.29, stdev=4175.26 00:18:20.061 clat (usec): min=540, max=1172, avg=968.33, stdev=94.08 00:18:20.061 lat (usec): min=547, max=1199, avg=995.75, stdev=94.74 00:18:20.061 clat percentiles (usec): 00:18:20.061 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 857], 20.00th=[ 906], 00:18:20.061 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:18:20.061 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:18:20.061 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:18:20.061 | 99.99th=[ 1172] 00:18:20.061 write: IOPS=740, BW=2961KiB/s (3032kB/s)(2964KiB/1001msec); 0 zone resets 00:18:20.061 slat (nsec): min=8964, max=57066, avg=30829.84, stdev=10318.96 00:18:20.061 clat (usec): min=249, max=899, avg=617.42, stdev=108.48 00:18:20.061 lat (usec): min=258, max=934, avg=648.25, stdev=112.96 00:18:20.061 clat percentiles (usec): 00:18:20.061 | 1.00th=[ 347], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 529], 00:18:20.061 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:18:20.061 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 775], 00:18:20.061 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 898], 99.95th=[ 898], 00:18:20.062 | 99.99th=[ 898] 00:18:20.062 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:20.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:20.062 lat (usec) : 250=0.08%, 500=8.22%, 750=46.93%, 1000=28.09% 00:18:20.062 lat (msec) : 2=16.68% 00:18:20.062 cpu : usr=2.60%, sys=4.80%, ctx=1254, majf=0, minf=1 00:18:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 issued rwts: total=512,741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.062 job2: (groupid=0, jobs=1): err= 0: pid=1533757: Mon Jun 10 11:25:16 2024 00:18:20.062 read: IOPS=16, BW=67.3KiB/s 
(68.9kB/s)(68.0KiB/1010msec) 00:18:20.062 slat (nsec): min=26216, max=27199, avg=26882.29, stdev=312.42 00:18:20.062 clat (usec): min=1102, max=42042, avg=39378.53, stdev=9870.15 00:18:20.062 lat (usec): min=1129, max=42069, avg=39405.42, stdev=9870.26 00:18:20.062 clat percentiles (usec): 00:18:20.062 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41157], 00:18:20.062 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:18:20.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:20.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:20.062 | 99.99th=[42206] 00:18:20.062 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:18:20.062 slat (nsec): min=9357, max=54954, avg=31395.20, stdev=9305.52 00:18:20.062 clat (usec): min=240, max=1379, avg=625.92, stdev=116.95 00:18:20.062 lat (usec): min=250, max=1418, avg=657.32, stdev=120.65 00:18:20.062 clat percentiles (usec): 00:18:20.062 | 1.00th=[ 330], 5.00th=[ 433], 10.00th=[ 490], 20.00th=[ 529], 00:18:20.062 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:18:20.062 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:18:20.062 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 1385], 99.95th=[ 1385], 00:18:20.062 | 99.99th=[ 1385] 00:18:20.062 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:20.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:20.062 lat (usec) : 250=0.19%, 500=11.15%, 750=74.29%, 1000=10.78% 00:18:20.062 lat (msec) : 2=0.57%, 50=3.02% 00:18:20.062 cpu : usr=0.89%, sys=2.18%, ctx=530, majf=0, minf=1 00:18:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.062 job3: (groupid=0, jobs=1): err= 0: pid=1533758: Mon Jun 10 11:25:16 2024 00:18:20.062 read: IOPS=532, BW=2130KiB/s (2181kB/s)(2132KiB/1001msec) 00:18:20.062 slat (nsec): min=6585, max=45437, avg=25565.56, stdev=6843.62 00:18:20.062 clat (usec): min=402, max=1063, avg=791.15, stdev=95.90 00:18:20.062 lat (usec): min=429, max=1090, avg=816.72, stdev=97.79 00:18:20.062 clat percentiles (usec): 00:18:20.062 | 1.00th=[ 506], 5.00th=[ 619], 10.00th=[ 660], 20.00th=[ 717], 00:18:20.062 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 832], 00:18:20.062 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 922], 00:18:20.062 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1057], 99.95th=[ 1057], 00:18:20.062 | 99.99th=[ 1057] 00:18:20.062 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:20.062 slat (nsec): min=8922, max=73723, avg=31643.55, stdev=9242.23 00:18:20.062 clat (usec): min=183, max=1033, avg=509.24, stdev=155.80 00:18:20.062 lat (usec): min=192, max=1067, avg=540.89, stdev=157.49 00:18:20.062 clat percentiles (usec): 00:18:20.062 | 1.00th=[ 247], 5.00th=[ 285], 10.00th=[ 330], 20.00th=[ 363], 00:18:20.062 | 30.00th=[ 424], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 529], 00:18:20.062 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 725], 95.00th=[ 824], 00:18:20.062 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 1012], 99.95th=[ 1037], 00:18:20.062 | 99.99th=[ 1037] 00:18:20.062 bw 
( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:20.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:20.062 lat (usec) : 250=0.83%, 500=34.68%, 750=34.94%, 1000=29.35% 00:18:20.062 lat (msec) : 2=0.19% 00:18:20.062 cpu : usr=3.30%, sys=5.50%, ctx=1558, majf=0, minf=1 00:18:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.062 issued rwts: total=533,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.062 00:18:20.062 Run status group 0 (all jobs): 00:18:20.062 READ: bw=6234KiB/s (6383kB/s), 67.3KiB/s-2130KiB/s (68.9kB/s-2181kB/s), io=6296KiB (6447kB), run=1001-1010msec 00:18:20.062 WRITE: bw=11.6MiB/s (12.2MB/s), 2028KiB/s-4092KiB/s (2076kB/s-4190kB/s), io=11.7MiB (12.3MB), run=1001-1010msec 00:18:20.062 00:18:20.062 Disk stats (read/write): 00:18:20.062 nvme0n1: ios=511/512, merge=0/0, ticks=1421/245, in_queue=1666, util=97.80% 00:18:20.062 nvme0n2: ios=560/512, merge=0/0, ticks=881/240, in_queue=1121, util=98.07% 00:18:20.062 nvme0n3: ios=71/512, merge=0/0, ticks=1078/275, in_queue=1353, util=98.11% 00:18:20.062 nvme0n4: ios=569/743, merge=0/0, ticks=902/353, in_queue=1255, util=97.99% 00:18:20.062 11:25:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:20.062 [global] 00:18:20.062 thread=1 00:18:20.062 invalidate=1 00:18:20.062 rw=randwrite 00:18:20.062 time_based=1 00:18:20.062 runtime=1 00:18:20.062 ioengine=libaio 00:18:20.062 direct=1 00:18:20.062 bs=4096 00:18:20.062 iodepth=1 00:18:20.062 norandommap=0 00:18:20.062 numjobs=1 00:18:20.062 00:18:20.062 verify_dump=1 00:18:20.062 verify_backlog=512 00:18:20.062 verify_state_save=0 00:18:20.062 do_verify=1 00:18:20.062 verify=crc32c-intel 00:18:20.062 [job0] 00:18:20.062 filename=/dev/nvme0n1 00:18:20.062 [job1] 00:18:20.062 filename=/dev/nvme0n2 00:18:20.062 [job2] 00:18:20.062 filename=/dev/nvme0n3 00:18:20.062 [job3] 00:18:20.062 filename=/dev/nvme0n4 00:18:20.062 Could not set queue depth (nvme0n1) 00:18:20.062 Could not set queue depth (nvme0n2) 00:18:20.062 Could not set queue depth (nvme0n3) 00:18:20.062 Could not set queue depth (nvme0n4) 00:18:20.062 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.062 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.062 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.062 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.062 fio-3.35 00:18:20.062 Starting 4 threads 00:18:21.443 00:18:21.443 job0: (groupid=0, jobs=1): err= 0: pid=1534221: Mon Jun 10 11:25:18 2024 00:18:21.443 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:18:21.443 slat (nsec): min=24450, max=25378, avg=24747.47, stdev=235.99 00:18:21.443 clat (usec): min=972, max=42093, avg=39486.09, stdev=9927.28 00:18:21.443 lat (usec): min=997, max=42118, avg=39510.84, stdev=9927.33 00:18:21.443 clat percentiles (usec): 00:18:21.443 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[41157], 20.00th=[41681], 
00:18:21.443 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:18:21.443 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:21.443 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:21.443 | 99.99th=[42206] 00:18:21.443 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:18:21.443 slat (nsec): min=9545, max=62296, avg=28204.31, stdev=8508.65 00:18:21.443 clat (usec): min=281, max=903, avg=627.43, stdev=113.13 00:18:21.443 lat (usec): min=291, max=934, avg=655.63, stdev=116.63 00:18:21.443 clat percentiles (usec): 00:18:21.443 | 1.00th=[ 310], 5.00th=[ 424], 10.00th=[ 482], 20.00th=[ 529], 00:18:21.443 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:18:21.443 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:18:21.443 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 906], 99.95th=[ 906], 00:18:21.443 | 99.99th=[ 906] 00:18:21.443 bw ( KiB/s): min= 4096, max= 4096, per=38.01%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.443 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.443 lat (usec) : 500=13.23%, 750=72.02%, 1000=11.72% 00:18:21.443 lat (msec) : 50=3.02% 00:18:21.443 cpu : usr=0.89%, sys=1.19%, ctx=532, majf=0, minf=1 00:18:21.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.443 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.443 job1: (groupid=0, jobs=1): err= 0: pid=1534230: Mon Jun 10 11:25:18 2024 00:18:21.443 read: IOPS=740, BW=2961KiB/s (3032kB/s)(2964KiB/1001msec) 00:18:21.443 slat (nsec): min=5966, max=59380, avg=23811.30, stdev=7092.74 00:18:21.443 clat (usec): min=235, max=909, avg=652.19, stdev=106.66 00:18:21.443 lat (usec): min=241, max=935, avg=676.00, stdev=108.91 00:18:21.443 clat percentiles (usec): 00:18:21.443 | 1.00th=[ 355], 5.00th=[ 457], 10.00th=[ 519], 20.00th=[ 562], 00:18:21.443 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701], 00:18:21.443 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 799], 00:18:21.443 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 914], 99.95th=[ 914], 00:18:21.443 | 99.99th=[ 914] 00:18:21.443 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:21.443 slat (nsec): min=8150, max=63014, avg=27612.33, stdev=9598.93 00:18:21.443 clat (usec): min=149, max=764, avg=447.15, stdev=108.83 00:18:21.444 lat (usec): min=161, max=796, avg=474.76, stdev=113.39 00:18:21.444 clat percentiles (usec): 00:18:21.444 | 1.00th=[ 188], 5.00th=[ 265], 10.00th=[ 297], 20.00th=[ 359], 00:18:21.444 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 453], 60.00th=[ 482], 00:18:21.444 | 70.00th=[ 515], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 627], 00:18:21.444 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 742], 99.95th=[ 766], 00:18:21.444 | 99.99th=[ 766] 00:18:21.444 bw ( KiB/s): min= 4096, max= 4096, per=38.01%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.444 lat (usec) : 250=1.81%, 500=40.28%, 750=49.97%, 1000=7.93% 00:18:21.444 cpu : usr=3.30%, sys=6.40%, ctx=1765, majf=0, minf=1 00:18:21.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.444 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 issued rwts: total=741,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.444 job2: (groupid=0, jobs=1): err= 0: pid=1534231: Mon Jun 10 11:25:18 2024 00:18:21.444 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:18:21.444 slat (nsec): min=10560, max=31795, avg=27343.62, stdev=5466.36 00:18:21.444 clat (usec): min=862, max=42068, avg=33412.59, stdev=16367.03 00:18:21.444 lat (usec): min=877, max=42095, avg=33439.94, stdev=16370.51 00:18:21.444 clat percentiles (usec): 00:18:21.444 | 1.00th=[ 865], 5.00th=[ 898], 10.00th=[ 988], 20.00th=[28181], 00:18:21.444 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:18:21.444 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:21.444 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:21.444 | 99.99th=[42206] 00:18:21.444 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:18:21.444 slat (nsec): min=9540, max=59119, avg=29691.17, stdev=11315.77 00:18:21.444 clat (usec): min=192, max=861, avg=567.80, stdev=114.54 00:18:21.444 lat (usec): min=202, max=893, avg=597.49, stdev=117.71 00:18:21.444 clat percentiles (usec): 00:18:21.444 | 1.00th=[ 314], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 478], 00:18:21.444 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:18:21.444 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 742], 00:18:21.444 | 99.00th=[ 807], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:18:21.444 | 99.99th=[ 865] 00:18:21.444 bw ( KiB/s): min= 4096, max= 4096, per=38.01%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.444 lat (usec) : 250=0.38%, 500=26.45%, 750=64.92%, 1000=4.88% 00:18:21.444 lat (msec) : 2=0.19%, 50=3.19% 00:18:21.444 cpu : usr=0.40%, sys=2.47%, ctx=534, majf=0, minf=1 00:18:21.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.444 job3: (groupid=0, jobs=1): err= 0: pid=1534232: Mon Jun 10 11:25:18 2024 00:18:21.444 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:21.444 slat (nsec): min=25776, max=59008, avg=26829.91, stdev=3584.59 00:18:21.444 clat (usec): min=767, max=1312, avg=1084.27, stdev=82.10 00:18:21.444 lat (usec): min=794, max=1356, avg=1111.10, stdev=82.25 00:18:21.444 clat percentiles (usec): 00:18:21.444 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1029], 00:18:21.444 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:18:21.444 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:18:21.444 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1319], 00:18:21.444 | 99.99th=[ 1319] 00:18:21.444 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:18:21.444 slat (nsec): min=8631, max=54133, avg=27781.70, stdev=10105.17 00:18:21.444 clat (usec): min=258, max=882, avg=595.07, stdev=112.46 00:18:21.444 lat (usec): min=282, 
max=918, avg=622.85, stdev=117.15 00:18:21.444 clat percentiles (usec): 00:18:21.444 | 1.00th=[ 302], 5.00th=[ 383], 10.00th=[ 429], 20.00th=[ 498], 00:18:21.444 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 635], 00:18:21.444 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 758], 00:18:21.444 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:18:21.444 | 99.99th=[ 881] 00:18:21.444 bw ( KiB/s): min= 4096, max= 4096, per=38.01%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.444 lat (usec) : 500=12.02%, 750=41.85%, 1000=9.24% 00:18:21.444 lat (msec) : 2=36.89% 00:18:21.444 cpu : usr=2.00%, sys=4.90%, ctx=1190, majf=0, minf=1 00:18:21.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.444 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.444 00:18:21.444 Run status group 0 (all jobs): 00:18:21.444 READ: bw=5103KiB/s (5225kB/s), 67.2KiB/s-2961KiB/s (68.8kB/s-3032kB/s), io=5164KiB (5288kB), run=1001-1012msec 00:18:21.444 WRITE: bw=10.5MiB/s (11.0MB/s), 2024KiB/s-4092KiB/s (2072kB/s-4190kB/s), io=10.6MiB (11.2MB), run=1001-1012msec 00:18:21.444 00:18:21.444 Disk stats (read/write): 00:18:21.444 nvme0n1: ios=64/512, merge=0/0, ticks=1046/310, in_queue=1356, util=98.30% 00:18:21.444 nvme0n2: ios=532/861, merge=0/0, ticks=316/310, in_queue=626, util=80.58% 00:18:21.444 nvme0n3: ios=32/512, merge=0/0, ticks=1289/270, in_queue=1559, util=98.43% 00:18:21.444 nvme0n4: ios=428/512, merge=0/0, ticks=528/250, in_queue=778, util=97.58% 00:18:21.444 11:25:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:21.444 [global] 00:18:21.444 thread=1 00:18:21.444 invalidate=1 00:18:21.444 rw=write 00:18:21.444 time_based=1 00:18:21.444 runtime=1 00:18:21.444 ioengine=libaio 00:18:21.444 direct=1 00:18:21.444 bs=4096 00:18:21.444 iodepth=128 00:18:21.444 norandommap=0 00:18:21.444 numjobs=1 00:18:21.444 00:18:21.444 verify_dump=1 00:18:21.444 verify_backlog=512 00:18:21.444 verify_state_save=0 00:18:21.444 do_verify=1 00:18:21.444 verify=crc32c-intel 00:18:21.444 [job0] 00:18:21.444 filename=/dev/nvme0n1 00:18:21.444 [job1] 00:18:21.444 filename=/dev/nvme0n2 00:18:21.444 [job2] 00:18:21.444 filename=/dev/nvme0n3 00:18:21.444 [job3] 00:18:21.444 filename=/dev/nvme0n4 00:18:21.702 Could not set queue depth (nvme0n1) 00:18:21.702 Could not set queue depth (nvme0n2) 00:18:21.702 Could not set queue depth (nvme0n3) 00:18:21.702 Could not set queue depth (nvme0n4) 00:18:21.962 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.962 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.962 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.962 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.962 fio-3.35 00:18:21.962 Starting 4 threads 00:18:23.341 00:18:23.341 job0: (groupid=0, jobs=1): err= 0: pid=1534631: Mon Jun 10 11:25:20 2024 00:18:23.341 
read: IOPS=6065, BW=23.7MiB/s (24.8MB/s)(24.0MiB/1013msec) 00:18:23.341 slat (nsec): min=1210, max=12067k, avg=85939.44, stdev=637804.18 00:18:23.341 clat (usec): min=3662, max=22513, avg=10889.33, stdev=2992.72 00:18:23.341 lat (usec): min=3670, max=22561, avg=10975.27, stdev=3026.98 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 4490], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8717], 00:18:23.341 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:18:23.341 | 70.00th=[11863], 80.00th=[13173], 90.00th=[15401], 95.00th=[16581], 00:18:23.341 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21627], 99.95th=[21890], 00:18:23.341 | 99.99th=[22414] 00:18:23.341 write: IOPS=6269, BW=24.5MiB/s (25.7MB/s)(24.8MiB/1013msec); 0 zone resets 00:18:23.341 slat (usec): min=2, max=9776, avg=66.48, stdev=349.32 00:18:23.341 clat (usec): min=1190, max=30239, avg=9701.64, stdev=4214.08 00:18:23.341 lat (usec): min=1201, max=30248, avg=9768.12, stdev=4235.68 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 3195], 5.00th=[ 4948], 10.00th=[ 5669], 20.00th=[ 7439], 00:18:23.341 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9503], 00:18:23.341 | 70.00th=[10159], 80.00th=[10421], 90.00th=[12780], 95.00th=[19006], 00:18:23.341 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278], 00:18:23.341 | 99.99th=[30278] 00:18:23.341 bw ( KiB/s): min=24576, max=25208, per=27.07%, avg=24892.00, stdev=446.89, samples=2 00:18:23.341 iops : min= 6144, max= 6302, avg=6223.00, stdev=111.72, samples=2 00:18:23.341 lat (msec) : 2=0.02%, 4=1.55%, 10=56.54%, 20=39.89%, 50=2.00% 00:18:23.341 cpu : usr=5.63%, sys=5.63%, ctx=765, majf=0, minf=1 00:18:23.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:23.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.341 issued rwts: total=6144,6351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.341 job1: (groupid=0, jobs=1): err= 0: pid=1534648: Mon Jun 10 11:25:20 2024 00:18:23.341 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:18:23.341 slat (nsec): min=1177, max=9845.8k, avg=71719.36, stdev=436255.38 00:18:23.341 clat (usec): min=5075, max=61766, avg=9096.31, stdev=4605.09 00:18:23.341 lat (usec): min=5147, max=61776, avg=9168.03, stdev=4634.85 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7242], 00:18:23.341 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8291], 00:18:23.341 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[14746], 00:18:23.341 | 99.00th=[33817], 99.50th=[36963], 99.90th=[61604], 99.95th=[61604], 00:18:23.341 | 99.99th=[61604] 00:18:23.341 write: IOPS=5808, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1008msec); 0 zone resets 00:18:23.341 slat (usec): min=2, max=16035, avg=97.17, stdev=534.57 00:18:23.341 clat (usec): min=3404, max=78716, avg=13035.51, stdev=14805.16 00:18:23.341 lat (usec): min=3412, max=78725, avg=13132.68, stdev=14905.44 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6652], 00:18:23.341 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:18:23.341 | 70.00th=[ 8160], 80.00th=[12518], 90.00th=[27657], 95.00th=[52167], 00:18:23.341 | 99.00th=[72877], 99.50th=[76022], 99.90th=[79168], 
99.95th=[79168], 00:18:23.341 | 99.99th=[79168] 00:18:23.341 bw ( KiB/s): min=13056, max=32768, per=24.91%, avg=22912.00, stdev=13938.49, samples=2 00:18:23.341 iops : min= 3264, max= 8192, avg=5728.00, stdev=3484.62, samples=2 00:18:23.341 lat (msec) : 4=0.10%, 10=78.37%, 20=13.79%, 50=4.74%, 100=3.00% 00:18:23.341 cpu : usr=4.27%, sys=4.97%, ctx=804, majf=0, minf=1 00:18:23.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:23.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.341 issued rwts: total=5632,5855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.341 job2: (groupid=0, jobs=1): err= 0: pid=1534666: Mon Jun 10 11:25:20 2024 00:18:23.341 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:18:23.341 slat (nsec): min=1187, max=6319.9k, avg=92513.07, stdev=548046.36 00:18:23.341 clat (usec): min=6006, max=22896, avg=11421.35, stdev=2521.52 00:18:23.341 lat (usec): min=6008, max=22902, avg=11513.86, stdev=2561.10 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9634], 00:18:23.341 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 00:18:23.341 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14091], 95.00th=[16319], 00:18:23.341 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22938], 99.95th=[22938], 00:18:23.341 | 99.99th=[22938] 00:18:23.341 write: IOPS=5234, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:18:23.341 slat (usec): min=2, max=13123, avg=95.13, stdev=559.50 00:18:23.341 clat (usec): min=3749, max=46422, avg=12764.93, stdev=6809.76 00:18:23.341 lat (usec): min=4447, max=46431, avg=12860.05, stdev=6850.44 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 5538], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:18:23.341 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11469], 00:18:23.341 | 70.00th=[11863], 80.00th=[12518], 90.00th=[15795], 95.00th=[33424], 00:18:23.341 | 99.00th=[39060], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:18:23.341 | 99.99th=[46400] 00:18:23.341 bw ( KiB/s): min=18488, max=22592, per=22.33%, avg=20540.00, stdev=2901.97, samples=2 00:18:23.341 iops : min= 4622, max= 5648, avg=5135.00, stdev=725.49, samples=2 00:18:23.341 lat (msec) : 4=0.01%, 10=23.35%, 20=71.73%, 50=4.91% 00:18:23.341 cpu : usr=3.29%, sys=5.78%, ctx=665, majf=0, minf=1 00:18:23.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:23.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.341 issued rwts: total=5120,5255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.341 job3: (groupid=0, jobs=1): err= 0: pid=1534672: Mon Jun 10 11:25:20 2024 00:18:23.341 read: IOPS=5559, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1013msec) 00:18:23.341 slat (nsec): min=1267, max=17551k, avg=80153.18, stdev=643946.34 00:18:23.341 clat (usec): min=1529, max=55610, avg=11179.04, stdev=3657.22 00:18:23.341 lat (usec): min=1555, max=55614, avg=11259.20, stdev=3700.11 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 8291], 00:18:23.341 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10945], 
60.00th=[11338], 00:18:23.341 | 70.00th=[11863], 80.00th=[13435], 90.00th=[15401], 95.00th=[18744], 00:18:23.341 | 99.00th=[20579], 99.50th=[21365], 99.90th=[49546], 99.95th=[49546], 00:18:23.341 | 99.99th=[55837] 00:18:23.341 write: IOPS=5755, BW=22.5MiB/s (23.6MB/s)(22.8MiB/1013msec); 0 zone resets 00:18:23.341 slat (usec): min=2, max=38433, avg=74.75, stdev=648.36 00:18:23.341 clat (usec): min=571, max=46508, avg=11235.01, stdev=6450.38 00:18:23.341 lat (usec): min=606, max=46517, avg=11309.76, stdev=6483.69 00:18:23.341 clat percentiles (usec): 00:18:23.341 | 1.00th=[ 2343], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6915], 00:18:23.341 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10945], 00:18:23.341 | 70.00th=[11600], 80.00th=[13698], 90.00th=[20317], 95.00th=[22676], 00:18:23.341 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:18:23.341 | 99.99th=[46400] 00:18:23.341 bw ( KiB/s): min=22664, max=22960, per=24.80%, avg=22812.00, stdev=209.30, samples=2 00:18:23.342 iops : min= 5666, max= 5740, avg=5703.00, stdev=52.33, samples=2 00:18:23.342 lat (usec) : 750=0.02% 00:18:23.342 lat (msec) : 2=0.30%, 4=1.66%, 10=42.72%, 20=48.83%, 50=6.47% 00:18:23.342 lat (msec) : 100=0.01% 00:18:23.342 cpu : usr=3.95%, sys=6.82%, ctx=500, majf=0, minf=1 00:18:23.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:23.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.342 issued rwts: total=5632,5830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.342 00:18:23.342 Run status group 0 (all jobs): 00:18:23.342 READ: bw=86.9MiB/s (91.1MB/s), 19.9MiB/s-23.7MiB/s (20.9MB/s-24.8MB/s), io=88.0MiB (92.3MB), run=1004-1013msec 00:18:23.342 WRITE: bw=89.8MiB/s (94.2MB/s), 20.4MiB/s-24.5MiB/s (21.4MB/s-25.7MB/s), io=91.0MiB (95.4MB), run=1004-1013msec 00:18:23.342 00:18:23.342 Disk stats (read/write): 00:18:23.342 nvme0n1: ios=5170/5632, merge=0/0, ticks=53434/50032, in_queue=103466, util=92.38% 00:18:23.342 nvme0n2: ios=5006/5120, merge=0/0, ticks=22885/29852, in_queue=52737, util=98.07% 00:18:23.342 nvme0n3: ios=4123/4511, merge=0/0, ticks=23041/25712, in_queue=48753, util=92.48% 00:18:23.342 nvme0n4: ios=4654/4967, merge=0/0, ticks=49631/48314, in_queue=97945, util=100.00% 00:18:23.342 11:25:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:23.342 [global] 00:18:23.342 thread=1 00:18:23.342 invalidate=1 00:18:23.342 rw=randwrite 00:18:23.342 time_based=1 00:18:23.342 runtime=1 00:18:23.342 ioengine=libaio 00:18:23.342 direct=1 00:18:23.342 bs=4096 00:18:23.342 iodepth=128 00:18:23.342 norandommap=0 00:18:23.342 numjobs=1 00:18:23.342 00:18:23.342 verify_dump=1 00:18:23.342 verify_backlog=512 00:18:23.342 verify_state_save=0 00:18:23.342 do_verify=1 00:18:23.342 verify=crc32c-intel 00:18:23.342 [job0] 00:18:23.342 filename=/dev/nvme0n1 00:18:23.342 [job1] 00:18:23.342 filename=/dev/nvme0n2 00:18:23.342 [job2] 00:18:23.342 filename=/dev/nvme0n3 00:18:23.342 [job3] 00:18:23.342 filename=/dev/nvme0n4 00:18:23.342 Could not set queue depth (nvme0n1) 00:18:23.342 Could not set queue depth (nvme0n2) 00:18:23.342 Could not set queue depth (nvme0n3) 00:18:23.342 Could not set queue depth (nvme0n4) 00:18:23.602 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:23.602 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:23.602 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:23.602 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:23.602 fio-3.35 00:18:23.602 Starting 4 threads 00:18:24.984 00:18:24.984 job0: (groupid=0, jobs=1): err= 0: pid=1535027: Mon Jun 10 11:25:21 2024 00:18:24.984 read: IOPS=9278, BW=36.2MiB/s (38.0MB/s)(36.5MiB/1007msec) 00:18:24.984 slat (nsec): min=1213, max=6411.7k, avg=54924.95, stdev=389999.49 00:18:24.984 clat (usec): min=2299, max=13170, avg=7348.59, stdev=1560.20 00:18:24.984 lat (usec): min=2304, max=13173, avg=7403.51, stdev=1583.15 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6456], 00:18:24.984 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:18:24.984 | 70.00th=[ 7570], 80.00th=[ 8455], 90.00th=[ 9372], 95.00th=[10814], 00:18:24.984 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13173], 99.95th=[13173], 00:18:24.984 | 99.99th=[13173] 00:18:24.984 write: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(38.0MiB/1007msec); 0 zone resets 00:18:24.984 slat (usec): min=2, max=5405, avg=45.44, stdev=303.62 00:18:24.984 clat (usec): min=1138, max=12936, avg=6071.21, stdev=1563.69 00:18:24.984 lat (usec): min=1148, max=12939, avg=6116.65, stdev=1570.40 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 2311], 5.00th=[ 3654], 10.00th=[ 3982], 20.00th=[ 4359], 00:18:24.984 | 30.00th=[ 5276], 40.00th=[ 6128], 50.00th=[ 6521], 60.00th=[ 6718], 00:18:24.984 | 70.00th=[ 6849], 80.00th=[ 6915], 90.00th=[ 8160], 95.00th=[ 8848], 00:18:24.984 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[12387], 99.95th=[12518], 00:18:24.984 | 99.99th=[12911] 00:18:24.984 bw ( KiB/s): min=38536, max=39280, per=41.90%, avg=38908.00, stdev=526.09, samples=2 00:18:24.984 iops : min= 9634, max= 9820, avg=9727.00, stdev=131.52, samples=2 00:18:24.984 lat (msec) : 2=0.28%, 4=5.48%, 10=90.47%, 20=3.78% 00:18:24.984 cpu : usr=6.26%, sys=9.24%, ctx=757, majf=0, minf=1 00:18:24.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:18:24.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.984 issued rwts: total=9343,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.984 job1: (groupid=0, jobs=1): err= 0: pid=1535042: Mon Jun 10 11:25:21 2024 00:18:24.984 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:18:24.984 slat (nsec): min=1170, max=9963.9k, avg=106449.21, stdev=686388.43 00:18:24.984 clat (usec): min=4971, max=35282, avg=12175.15, stdev=4626.29 00:18:24.984 lat (usec): min=4979, max=35284, avg=12281.60, stdev=4681.62 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 6194], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[ 9372], 00:18:24.984 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10945], 60.00th=[11338], 00:18:24.984 | 70.00th=[11731], 80.00th=[13566], 90.00th=[18220], 95.00th=[23462], 00:18:24.984 | 99.00th=[30540], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:18:24.984 | 99.99th=[35390] 00:18:24.984 write: IOPS=4371, BW=17.1MiB/s 
(17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:18:24.984 slat (usec): min=2, max=12746, avg=122.88, stdev=593.63 00:18:24.984 clat (usec): min=2122, max=35280, avg=17719.65, stdev=6687.61 00:18:24.984 lat (usec): min=2130, max=35282, avg=17842.54, stdev=6735.01 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 4817], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[10159], 00:18:24.984 | 30.00th=[13960], 40.00th=[17433], 50.00th=[19006], 60.00th=[19268], 00:18:24.984 | 70.00th=[20055], 80.00th=[23725], 90.00th=[27132], 95.00th=[28705], 00:18:24.984 | 99.00th=[29230], 99.50th=[31065], 99.90th=[34341], 99.95th=[34341], 00:18:24.984 | 99.99th=[35390] 00:18:24.984 bw ( KiB/s): min=16432, max=17800, per=18.43%, avg=17116.00, stdev=967.32, samples=2 00:18:24.984 iops : min= 4108, max= 4450, avg=4279.00, stdev=241.83, samples=2 00:18:24.984 lat (msec) : 4=0.42%, 10=26.31%, 20=54.15%, 50=19.11% 00:18:24.984 cpu : usr=3.37%, sys=4.07%, ctx=498, majf=0, minf=1 00:18:24.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:24.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.984 issued rwts: total=4096,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.984 job2: (groupid=0, jobs=1): err= 0: pid=1535053: Mon Jun 10 11:25:21 2024 00:18:24.984 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:18:24.984 slat (nsec): min=1970, max=27892k, avg=218197.78, stdev=1547463.50 00:18:24.984 clat (usec): min=11981, max=87954, avg=27703.68, stdev=16711.67 00:18:24.984 lat (usec): min=15110, max=87960, avg=27921.88, stdev=16762.01 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[14615], 5.00th=[15664], 10.00th=[16712], 20.00th=[18744], 00:18:24.984 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:18:24.984 | 70.00th=[21365], 80.00th=[35390], 90.00th=[49021], 95.00th=[60031], 00:18:24.984 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:18:24.984 | 99.99th=[87557] 00:18:24.984 write: IOPS=2747, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1002msec); 0 zone resets 00:18:24.984 slat (usec): min=6, max=25613, avg=154.27, stdev=1086.21 00:18:24.984 clat (usec): min=1865, max=63096, avg=20426.80, stdev=11338.73 00:18:24.984 lat (usec): min=5382, max=63116, avg=20581.07, stdev=11366.77 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 5604], 5.00th=[12649], 10.00th=[14091], 20.00th=[14353], 00:18:24.984 | 30.00th=[14484], 40.00th=[14615], 50.00th=[15008], 60.00th=[15795], 00:18:24.984 | 70.00th=[17433], 80.00th=[28181], 90.00th=[41681], 95.00th=[45351], 00:18:24.984 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:18:24.984 | 99.99th=[63177] 00:18:24.984 bw ( KiB/s): min= 8720, max=12288, per=11.31%, avg=10504.00, stdev=2522.96, samples=2 00:18:24.984 iops : min= 2180, max= 3072, avg=2626.00, stdev=630.74, samples=2 00:18:24.984 lat (msec) : 2=0.02%, 10=1.20%, 20=68.68%, 50=25.92%, 100=4.18% 00:18:24.984 cpu : usr=1.80%, sys=3.70%, ctx=165, majf=0, minf=1 00:18:24.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.984 issued rwts: total=2560,2753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.984 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:18:24.984 job3: (groupid=0, jobs=1): err= 0: pid=1535056: Mon Jun 10 11:25:21 2024 00:18:24.984 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:18:24.984 slat (nsec): min=1229, max=9339.5k, avg=85780.06, stdev=643837.05 00:18:24.984 clat (usec): min=4000, max=19679, avg=10816.26, stdev=2439.76 00:18:24.984 lat (usec): min=4004, max=22921, avg=10902.04, stdev=2491.96 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 5145], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9503], 00:18:24.984 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:18:24.984 | 70.00th=[10945], 80.00th=[12518], 90.00th=[14484], 95.00th=[16319], 00:18:24.984 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[19006], 00:18:24.984 | 99.99th=[19792] 00:18:24.984 write: IOPS=6475, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1009msec); 0 zone resets 00:18:24.984 slat (usec): min=2, max=8226, avg=67.29, stdev=419.34 00:18:24.984 clat (usec): min=1528, max=19313, avg=9423.52, stdev=2211.20 00:18:24.984 lat (usec): min=1538, max=19316, avg=9490.81, stdev=2243.02 00:18:24.984 clat percentiles (usec): 00:18:24.984 | 1.00th=[ 3458], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 7832], 00:18:24.984 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:18:24.984 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[13042], 00:18:24.984 | 99.00th=[16581], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:18:24.984 | 99.99th=[19268] 00:18:24.984 bw ( KiB/s): min=25584, max=25672, per=27.60%, avg=25628.00, stdev=62.23, samples=2 00:18:24.984 iops : min= 6396, max= 6418, avg=6407.00, stdev=15.56, samples=2 00:18:24.984 lat (msec) : 2=0.07%, 4=0.89%, 10=47.43%, 20=51.61% 00:18:24.984 cpu : usr=5.26%, sys=5.75%, ctx=641, majf=0, minf=1 00:18:24.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:24.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.985 issued rwts: total=6144,6534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.985 00:18:24.985 Run status group 0 (all jobs): 00:18:24.985 READ: bw=85.7MiB/s (89.9MB/s), 9.98MiB/s-36.2MiB/s (10.5MB/s-38.0MB/s), io=86.5MiB (90.7MB), run=1002-1009msec 00:18:24.985 WRITE: bw=90.7MiB/s (95.1MB/s), 10.7MiB/s-37.7MiB/s (11.3MB/s-39.6MB/s), io=91.5MiB (95.9MB), run=1002-1009msec 00:18:24.985 00:18:24.985 Disk stats (read/write): 00:18:24.985 nvme0n1: ios=7946/8192, merge=0/0, ticks=54828/47567, in_queue=102395, util=96.49% 00:18:24.985 nvme0n2: ios=3622/3639, merge=0/0, ticks=42301/61136, in_queue=103437, util=88.41% 00:18:24.985 nvme0n3: ios=2093/2336, merge=0/0, ticks=14973/14181, in_queue=29154, util=92.79% 00:18:24.985 nvme0n4: ios=5172/5478, merge=0/0, ticks=53797/50027, in_queue=103824, util=93.35% 00:18:24.985 11:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:24.985 11:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1535204 00:18:24.985 11:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:24.985 11:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:24.985 [global] 00:18:24.985 thread=1 00:18:24.985 invalidate=1 00:18:24.985 rw=read 00:18:24.985 time_based=1 00:18:24.985 runtime=10 
00:18:24.985 ioengine=libaio 00:18:24.985 direct=1 00:18:24.985 bs=4096 00:18:24.985 iodepth=1 00:18:24.985 norandommap=1 00:18:24.985 numjobs=1 00:18:24.985 00:18:24.985 [job0] 00:18:24.985 filename=/dev/nvme0n1 00:18:24.985 [job1] 00:18:24.985 filename=/dev/nvme0n2 00:18:24.985 [job2] 00:18:24.985 filename=/dev/nvme0n3 00:18:24.985 [job3] 00:18:24.985 filename=/dev/nvme0n4 00:18:24.985 Could not set queue depth (nvme0n1) 00:18:24.985 Could not set queue depth (nvme0n2) 00:18:24.985 Could not set queue depth (nvme0n3) 00:18:24.985 Could not set queue depth (nvme0n4) 00:18:24.985 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.985 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.985 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.985 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.985 fio-3.35 00:18:24.985 Starting 4 threads 00:18:27.675 11:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:27.935 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9699328, buflen=4096 00:18:27.935 fio: pid=1535483, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:27.935 11:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:28.196 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10321920, buflen=4096 00:18:28.196 fio: pid=1535470, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:28.196 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.196 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:28.196 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11145216, buflen=4096 00:18:28.196 fio: pid=1535414, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:28.196 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.196 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:28.457 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.457 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:28.458 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=315392, buflen=4096 00:18:28.458 fio: pid=1535433, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:28.458 00:18:28.458 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1535414: Mon Jun 10 11:25:25 2024 00:18:28.458 read: IOPS=923, BW=3691KiB/s (3779kB/s)(10.6MiB/2949msec) 00:18:28.458 slat (usec): min=6, max=16227, avg=40.83, stdev=453.75 00:18:28.458 clat (usec): min=543, max=41460, avg=1036.48, stdev=781.09 00:18:28.458 lat (usec): min=567, max=41483, 
avg=1077.32, stdev=903.74 00:18:28.458 clat percentiles (usec): 00:18:28.458 | 1.00th=[ 734], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 955], 00:18:28.458 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:18:28.458 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1139], 00:18:28.458 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1287], 99.95th=[ 1336], 00:18:28.458 | 99.99th=[41681] 00:18:28.458 bw ( KiB/s): min= 3464, max= 3824, per=38.59%, avg=3724.80, stdev=147.79, samples=5 00:18:28.458 iops : min= 866, max= 956, avg=931.20, stdev=36.95, samples=5 00:18:28.458 lat (usec) : 750=1.47%, 1000=31.70% 00:18:28.458 lat (msec) : 2=66.75%, 50=0.04% 00:18:28.458 cpu : usr=0.75%, sys=2.85%, ctx=2727, majf=0, minf=1 00:18:28.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 issued rwts: total=2722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.458 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1535433: Mon Jun 10 11:25:25 2024 00:18:28.458 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(308KiB/3186msec) 00:18:28.458 slat (usec): min=24, max=233, avg=32.30, stdev=37.20 00:18:28.458 clat (usec): min=999, max=42107, avg=41321.57, stdev=4664.68 00:18:28.458 lat (usec): min=1035, max=42132, avg=41351.68, stdev=4663.97 00:18:28.458 clat percentiles (usec): 00:18:28.458 | 1.00th=[ 996], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:18:28.458 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:28.458 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:28.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:28.458 | 99.99th=[42206] 00:18:28.458 bw ( KiB/s): min= 96, max= 99, per=0.99%, avg=96.50, stdev= 1.22, samples=6 00:18:28.458 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:18:28.458 lat (usec) : 1000=1.28% 00:18:28.458 lat (msec) : 50=97.44% 00:18:28.458 cpu : usr=0.09%, sys=0.00%, ctx=81, majf=0, minf=1 00:18:28.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.458 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1535470: Mon Jun 10 11:25:25 2024 00:18:28.458 read: IOPS=915, BW=3661KiB/s (3749kB/s)(9.84MiB/2753msec) 00:18:28.458 slat (usec): min=6, max=11890, avg=33.72, stdev=320.49 00:18:28.458 clat (usec): min=538, max=1536, avg=1051.82, stdev=93.95 00:18:28.458 lat (usec): min=562, max=12976, avg=1085.54, stdev=335.22 00:18:28.458 clat percentiles (usec): 00:18:28.458 | 1.00th=[ 791], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 988], 00:18:28.458 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:18:28.458 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:18:28.458 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1369], 99.95th=[ 1369], 00:18:28.458 | 99.99th=[ 1532] 00:18:28.458 bw ( KiB/s): min= 3640, max= 3752, 
per=38.25%, avg=3691.20, stdev=45.82, samples=5 00:18:28.458 iops : min= 910, max= 938, avg=922.80, stdev=11.45, samples=5 00:18:28.458 lat (usec) : 750=0.28%, 1000=25.59% 00:18:28.458 lat (msec) : 2=74.10% 00:18:28.458 cpu : usr=0.87%, sys=2.76%, ctx=2523, majf=0, minf=1 00:18:28.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.458 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1535483: Mon Jun 10 11:25:25 2024 00:18:28.458 read: IOPS=921, BW=3684KiB/s (3773kB/s)(9472KiB/2571msec) 00:18:28.458 slat (nsec): min=6995, max=63039, avg=24806.20, stdev=2882.23 00:18:28.458 clat (usec): min=667, max=1394, avg=1054.13, stdev=92.21 00:18:28.458 lat (usec): min=691, max=1419, avg=1078.93, stdev=92.23 00:18:28.458 clat percentiles (usec): 00:18:28.458 | 1.00th=[ 783], 5.00th=[ 889], 10.00th=[ 938], 20.00th=[ 988], 00:18:28.458 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:18:28.458 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:18:28.458 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1352], 00:18:28.458 | 99.99th=[ 1401] 00:18:28.458 bw ( KiB/s): min= 3656, max= 3712, per=38.20%, avg=3686.40, stdev=24.92, samples=5 00:18:28.458 iops : min= 914, max= 928, avg=921.60, stdev= 6.23, samples=5 00:18:28.458 lat (usec) : 750=0.72%, 1000=21.87% 00:18:28.458 lat (msec) : 2=77.37% 00:18:28.458 cpu : usr=1.05%, sys=2.61%, ctx=2369, majf=0, minf=2 00:18:28.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.458 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.458 00:18:28.458 Run status group 0 (all jobs): 00:18:28.458 READ: bw=9650KiB/s (9881kB/s), 96.7KiB/s-3691KiB/s (99.0kB/s-3779kB/s), io=30.0MiB (31.5MB), run=2571-3186msec 00:18:28.458 00:18:28.458 Disk stats (read/write): 00:18:28.458 nvme0n1: ios=2550/0, merge=0/0, ticks=2591/0, in_queue=2591, util=91.22% 00:18:28.458 nvme0n2: ios=73/0, merge=0/0, ticks=3016/0, in_queue=3016, util=94.21% 00:18:28.458 nvme0n3: ios=2329/0, merge=0/0, ticks=2345/0, in_queue=2345, util=95.50% 00:18:28.458 nvme0n4: ios=2327/0, merge=0/0, ticks=2385/0, in_queue=2385, util=96.30% 00:18:28.719 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.719 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:28.980 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.980 11:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:29.241 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:29.241 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:29.241 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:29.241 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1535204 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:29.501 nvmf hotplug test: fio failed as expected 00:18:29.501 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.761 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.761 rmmod nvme_tcp 00:18:29.761 rmmod nvme_fabrics 00:18:29.761 rmmod nvme_keyring 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:30.021 
11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1532034 ']' 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1532034 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1532034 ']' 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1532034 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:18:30.021 11:25:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1532034 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1532034' 00:18:30.021 killing process with pid 1532034 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1532034 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1532034 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.021 11:25:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.563 11:25:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.564 00:18:32.564 real 0m29.959s 00:18:32.564 user 2m9.382s 00:18:32.564 sys 0m10.049s 00:18:32.564 11:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:32.564 11:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.564 ************************************ 00:18:32.564 END TEST nvmf_fio_target 00:18:32.564 ************************************ 00:18:32.564 11:25:29 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:32.564 11:25:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:32.564 11:25:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:32.564 11:25:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:32.564 ************************************ 00:18:32.564 START TEST nvmf_bdevio 00:18:32.564 ************************************ 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:32.564 * Looking for test storage... 
00:18:32.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.564 11:25:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:40.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:40.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.705 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:40.706 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:40.706 
Found net devices under 0000:4b:00.1: cvl_0_1 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:18:40.706 00:18:40.706 --- 10.0.0.2 ping statistics --- 00:18:40.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.706 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:18:40.706 00:18:40.706 --- 10.0.0.1 ping statistics --- 00:18:40.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.706 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1540779 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1540779 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1540779 ']' 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:40.706 11:25:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:40.706 [2024-06-10 11:25:37.517364] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:18:40.706 [2024-06-10 11:25:37.517422] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.706 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.706 [2024-06-10 11:25:37.609603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.706 [2024-06-10 11:25:37.698700] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.706 [2024-06-10 11:25:37.698755] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:40.706 [2024-06-10 11:25:37.698764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.706 [2024-06-10 11:25:37.698770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.706 [2024-06-10 11:25:37.698776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.706 [2024-06-10 11:25:37.698929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:18:40.706 [2024-06-10 11:25:37.699194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:18:40.706 [2024-06-10 11:25:37.699345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:18:40.706 [2024-06-10 11:25:37.699346] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:41.287 [2024-06-10 11:25:38.430349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.287 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:41.288 Malloc0 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:18:41.288 [2024-06-10 11:25:38.482932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:41.288 { 00:18:41.288 "params": { 00:18:41.288 "name": "Nvme$subsystem", 00:18:41.288 "trtype": "$TEST_TRANSPORT", 00:18:41.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.288 "adrfam": "ipv4", 00:18:41.288 "trsvcid": "$NVMF_PORT", 00:18:41.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.288 "hdgst": ${hdgst:-false}, 00:18:41.288 "ddgst": ${ddgst:-false} 00:18:41.288 }, 00:18:41.288 "method": "bdev_nvme_attach_controller" 00:18:41.288 } 00:18:41.288 EOF 00:18:41.288 )") 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:41.288 11:25:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:41.288 "params": { 00:18:41.288 "name": "Nvme1", 00:18:41.288 "trtype": "tcp", 00:18:41.288 "traddr": "10.0.0.2", 00:18:41.288 "adrfam": "ipv4", 00:18:41.288 "trsvcid": "4420", 00:18:41.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.288 "hdgst": false, 00:18:41.288 "ddgst": false 00:18:41.288 }, 00:18:41.288 "method": "bdev_nvme_attach_controller" 00:18:41.288 }' 00:18:41.549 [2024-06-10 11:25:38.542017] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:18:41.549 [2024-06-10 11:25:38.542103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540827 ] 00:18:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.549 [2024-06-10 11:25:38.644475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.549 [2024-06-10 11:25:38.740173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.549 [2024-06-10 11:25:38.740298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.549 [2024-06-10 11:25:38.740302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.809 I/O targets: 00:18:41.809 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:41.809 00:18:41.809 00:18:41.809 CUnit - A unit testing framework for C - Version 2.1-3 00:18:41.809 http://cunit.sourceforge.net/ 00:18:41.809 00:18:41.809 00:18:41.809 Suite: bdevio tests on: Nvme1n1 00:18:41.809 Test: blockdev write read block ...passed 00:18:41.809 Test: blockdev write zeroes read block ...passed 00:18:41.809 Test: blockdev write zeroes read no split ...passed 00:18:42.068 Test: blockdev write zeroes read split ...passed 00:18:42.068 Test: blockdev write zeroes read split partial ...passed 00:18:42.068 Test: blockdev reset ...[2024-06-10 11:25:39.096502] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.068 [2024-06-10 11:25:39.096561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1452270 (9): Bad file descriptor 00:18:42.068 [2024-06-10 11:25:39.115055] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:42.068 passed 00:18:42.068 Test: blockdev write read 8 blocks ...passed 00:18:42.068 Test: blockdev write read size > 128k ...passed 00:18:42.068 Test: blockdev write read invalid size ...passed 00:18:42.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:42.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:42.068 Test: blockdev write read max offset ...passed 00:18:42.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:42.328 Test: blockdev writev readv 8 blocks ...passed 00:18:42.328 Test: blockdev writev readv 30 x 1block ...passed 00:18:42.328 Test: blockdev writev readv block ...passed 00:18:42.328 Test: blockdev writev readv size > 128k ...passed 00:18:42.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:42.328 Test: blockdev comparev and writev ...[2024-06-10 11:25:39.379002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.379028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.379039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.379551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.379570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.379576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.380073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.380081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.380100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.380557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.380565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.380575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.328 [2024-06-10 11:25:39.380580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.328 passed 00:18:42.328 Test: blockdev nvme passthru rw ...passed 00:18:42.328 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:25:39.465530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.328 [2024-06-10 11:25:39.465542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.328 [2024-06-10 11:25:39.465886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.329 [2024-06-10 11:25:39.465896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.329 [2024-06-10 11:25:39.466258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.329 [2024-06-10 11:25:39.466266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.329 [2024-06-10 11:25:39.466626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.329 [2024-06-10 11:25:39.466634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.329 passed 00:18:42.329 Test: blockdev nvme admin passthru ...passed 00:18:42.329 Test: blockdev copy ...passed 00:18:42.329 00:18:42.329 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.329 suites 1 1 n/a 0 0 00:18:42.329 tests 23 23 23 0 0 00:18:42.329 asserts 152 152 152 0 n/a 00:18:42.329 00:18:42.329 Elapsed time = 1.248 seconds 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.589 rmmod nvme_tcp 00:18:42.589 rmmod nvme_fabrics 00:18:42.589 rmmod nvme_keyring 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1540779 ']' 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1540779 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
1540779 ']' 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1540779 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1540779 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1540779' 00:18:42.589 killing process with pid 1540779 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1540779 00:18:42.589 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1540779 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.849 11:25:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.389 11:25:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.389 00:18:45.389 real 0m12.682s 00:18:45.389 user 0m13.225s 00:18:45.389 sys 0m6.505s 00:18:45.389 11:25:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:45.389 11:25:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.389 ************************************ 00:18:45.389 END TEST nvmf_bdevio 00:18:45.389 ************************************ 00:18:45.389 11:25:42 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:45.389 11:25:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:45.389 11:25:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:45.389 11:25:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.389 ************************************ 00:18:45.389 START TEST nvmf_auth_target 00:18:45.389 ************************************ 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:45.389 * Looking for test storage... 
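The run summary above closes out the bdevio case: the test subsystem is deleted over RPC, nvmftestfini unloads the kernel initiator modules, the nvmf_tgt process (pid 1540779) is killed and waited on, and the namespaced interface address is flushed before run_test launches the next case, nvmf_auth_target. A minimal bash sketch of that teardown order, reusing the module and interface names from the log (the $nvmfpid variable is an assumed stand-in for the pid the scripts track), not the SPDK scripts themselves:

#!/usr/bin/env bash
# Sketch of the teardown sequence visible in the log above.

nvmf_teardown_sketch() {
    sync
    # Unload the kernel NVMe/TCP initiator stack loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target application (the log shows pid 1540779 being killed and waited on)
    if [[ -n "$nvmfpid" ]]; then
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null
    fi

    # Drop the address assigned to the initiator-side port during setup
    ip -4 addr flush cvl_0_1
}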
00:18:45.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.389 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.390 11:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.525 11:25:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:53.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:53.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.525 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:18:53.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:53.526 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:18:53.526 00:18:53.526 --- 10.0.0.2 ping statistics --- 00:18:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.526 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:18:53.526 00:18:53.526 --- 10.0.0.1 ping statistics --- 00:18:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.526 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1545494 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1545494 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1545494 ']' 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
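By this point nvmftestinit has located the two E810 ports (net devices cvl_0_0 and cvl_0_1), moved the target-side port into its own network namespace, assigned the 10.0.0.x addresses, opened TCP port 4420, and confirmed reachability with one ping in each direction before loading nvme-tcp and starting the target. Condensed into a standalone sketch with the same device, namespace, and address names as the log (run as root on a host with the two ports cabled back to back):

#!/usr/bin/env bash
# Sketch of the target/initiator split built above: cvl_0_0 becomes the target
# port inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root
# namespace as the initiator port.
set -e

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept inbound TCP traffic to the NVMe/TCP port, as the script does
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before the target is started
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1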
00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:53.526 11:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1545648 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfdfb9196d367003378cbe07ab4ee51514399fe0dd4a77de 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WRX 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfdfb9196d367003378cbe07ab4ee51514399fe0dd4a77de 0 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfdfb9196d367003378cbe07ab4ee51514399fe0dd4a77de 0 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfdfb9196d367003378cbe07ab4ee51514399fe0dd4a77de 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.467 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WRX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WRX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.WRX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09b5133833ca360488208567f215a59caba20ca913cae30dbcd9116a370f5520 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8XR 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09b5133833ca360488208567f215a59caba20ca913cae30dbcd9116a370f5520 3 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09b5133833ca360488208567f215a59caba20ca913cae30dbcd9116a370f5520 3 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09b5133833ca360488208567f215a59caba20ca913cae30dbcd9116a370f5520 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8XR 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8XR 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8XR 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5a2db43ceee5b7f3e0900b9b161bb539 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fKn 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5a2db43ceee5b7f3e0900b9b161bb539 1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5a2db43ceee5b7f3e0900b9b161bb539 1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=5a2db43ceee5b7f3e0900b9b161bb539 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fKn 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fKn 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fKn 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b720bac4deb4516fd25f0107a3738f9070a86ec677501d79 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UlM 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b720bac4deb4516fd25f0107a3738f9070a86ec677501d79 2 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b720bac4deb4516fd25f0107a3738f9070a86ec677501d79 2 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b720bac4deb4516fd25f0107a3738f9070a86ec677501d79 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UlM 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UlM 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.UlM 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.468 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fe6dad997e4cd4f6ec20c739630364aaa5a7ca92fd7ecaac 00:18:54.728 
11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KnJ 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fe6dad997e4cd4f6ec20c739630364aaa5a7ca92fd7ecaac 2 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fe6dad997e4cd4f6ec20c739630364aaa5a7ca92fd7ecaac 2 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fe6dad997e4cd4f6ec20c739630364aaa5a7ca92fd7ecaac 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KnJ 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KnJ 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.KnJ 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.728 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c70e38fe779c338f8c6ec02eafeb246c 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RoN 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c70e38fe779c338f8c6ec02eafeb246c 1 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c70e38fe779c338f8c6ec02eafeb246c 1 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c70e38fe779c338f8c6ec02eafeb246c 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RoN 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RoN 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.RoN 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8d332c162dad59f408acfad3f95dedd1f054a04ac0fcf65eff9e2bf22f2cbec 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PcR 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8d332c162dad59f408acfad3f95dedd1f054a04ac0fcf65eff9e2bf22f2cbec 3 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8d332c162dad59f408acfad3f95dedd1f054a04ac0fcf65eff9e2bf22f2cbec 3 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8d332c162dad59f408acfad3f95dedd1f054a04ac0fcf65eff9e2bf22f2cbec 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PcR 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PcR 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PcR 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1545494 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1545494 ']' 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
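The block above is the host-side key material being generated: for each of keys[0..3] (and the paired ckeys) the script reads random bytes with xxd, wraps the hex string in a DHHC-1 secret with a small Python helper, and stores it mode-0600 in a /tmp/spdk.key-* file; those files are what the keyring_file_add_key RPCs below register and what the later nvme connect --dhchap-secret arguments carry. A condensed sketch of that flow follows, with the digest index taken from the map in the log (null=0, sha256=1, sha384=2, sha512=3); the detail that the secret is the ASCII hex string with a little-endian CRC-32 appended before base64 encoding is an assumption inferred from the DHHC-1 secrets visible later in the log, not something the excerpt states:

#!/usr/bin/env bash
# Sketch of gen_dhchap_key as it appears in the log; the CRC-32 suffix is an
# assumption (it matches the format nvme-cli's gen-dhchap-key produces).

gen_dhchap_key_sketch() {
    local digest_idx=$1 hexlen=$2 hexkey file
    # hexlen hex characters come from hexlen/2 random bytes (len=48 -> xxd -l 24)
    hexkey=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$hexkey" "$digest_idx" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()                      # the ASCII hex string is the secret payload
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed CRC-32 suffix
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

# Example: a 48-hex-character key with the null digest, like keys[0] above
gen_dhchap_key_sketch 0 48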
00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:54.729 11:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1545648 /var/tmp/host.sock 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1545648 ']' 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:54.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:54.991 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WRX 00:18:55.251 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.252 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.252 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.252 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WRX 00:18:55.252 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WRX 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8XR ]] 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8XR 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8XR 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8XR 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fKn 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fKn 00:18:55.512 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fKn 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.UlM ]] 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UlM 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UlM 00:18:55.775 11:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UlM 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KnJ 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KnJ 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KnJ 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.RoN ]] 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RoN 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RoN 00:18:56.108 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.RoN 00:18:56.368 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:56.368 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PcR 00:18:56.369 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.369 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.369 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.369 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PcR 00:18:56.369 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PcR 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.630 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.890 11:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.151 00:18:57.151 11:25:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.151 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.151 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.412 { 00:18:57.412 "cntlid": 1, 00:18:57.412 "qid": 0, 00:18:57.412 "state": "enabled", 00:18:57.412 "listen_address": { 00:18:57.412 "trtype": "TCP", 00:18:57.412 "adrfam": "IPv4", 00:18:57.412 "traddr": "10.0.0.2", 00:18:57.412 "trsvcid": "4420" 00:18:57.412 }, 00:18:57.412 "peer_address": { 00:18:57.412 "trtype": "TCP", 00:18:57.412 "adrfam": "IPv4", 00:18:57.412 "traddr": "10.0.0.1", 00:18:57.412 "trsvcid": "50654" 00:18:57.412 }, 00:18:57.412 "auth": { 00:18:57.412 "state": "completed", 00:18:57.412 "digest": "sha256", 00:18:57.412 "dhgroup": "null" 00:18:57.412 } 00:18:57.412 } 00:18:57.412 ]' 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.412 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.673 11:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:18:58.243 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.244 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.503 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.763 00:18:58.763 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.763 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.763 11:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.024 { 00:18:59.024 "cntlid": 3, 00:18:59.024 "qid": 0, 00:18:59.024 "state": "enabled", 00:18:59.024 "listen_address": { 00:18:59.024 
"trtype": "TCP", 00:18:59.024 "adrfam": "IPv4", 00:18:59.024 "traddr": "10.0.0.2", 00:18:59.024 "trsvcid": "4420" 00:18:59.024 }, 00:18:59.024 "peer_address": { 00:18:59.024 "trtype": "TCP", 00:18:59.024 "adrfam": "IPv4", 00:18:59.024 "traddr": "10.0.0.1", 00:18:59.024 "trsvcid": "50672" 00:18:59.024 }, 00:18:59.024 "auth": { 00:18:59.024 "state": "completed", 00:18:59.024 "digest": "sha256", 00:18:59.024 "dhgroup": "null" 00:18:59.024 } 00:18:59.024 } 00:18:59.024 ]' 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.024 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.285 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.285 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.285 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.285 11:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.227 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.490 00:19:00.490 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.490 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.490 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.750 { 00:19:00.750 "cntlid": 5, 00:19:00.750 "qid": 0, 00:19:00.750 "state": "enabled", 00:19:00.750 "listen_address": { 00:19:00.750 "trtype": "TCP", 00:19:00.750 "adrfam": "IPv4", 00:19:00.750 "traddr": "10.0.0.2", 00:19:00.750 "trsvcid": "4420" 00:19:00.750 }, 00:19:00.750 "peer_address": { 00:19:00.750 "trtype": "TCP", 00:19:00.750 "adrfam": "IPv4", 00:19:00.750 "traddr": "10.0.0.1", 00:19:00.750 "trsvcid": "50688" 00:19:00.750 }, 00:19:00.750 "auth": { 00:19:00.750 "state": "completed", 00:19:00.750 "digest": "sha256", 00:19:00.750 "dhgroup": "null" 00:19:00.750 } 00:19:00.750 } 00:19:00.750 ]' 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.750 11:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.010 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.951 11:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.951 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.211 00:19:02.211 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.211 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.211 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.471 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.472 { 00:19:02.472 "cntlid": 7, 00:19:02.472 "qid": 0, 00:19:02.472 "state": "enabled", 00:19:02.472 "listen_address": { 00:19:02.472 "trtype": "TCP", 00:19:02.472 "adrfam": "IPv4", 00:19:02.472 "traddr": "10.0.0.2", 00:19:02.472 "trsvcid": "4420" 00:19:02.472 }, 00:19:02.472 "peer_address": { 00:19:02.472 "trtype": "TCP", 00:19:02.472 "adrfam": "IPv4", 00:19:02.472 "traddr": "10.0.0.1", 00:19:02.472 "trsvcid": "50722" 00:19:02.472 }, 00:19:02.472 "auth": { 00:19:02.472 "state": "completed", 00:19:02.472 "digest": "sha256", 00:19:02.472 "dhgroup": "null" 00:19:02.472 } 00:19:02.472 } 00:19:02.472 ]' 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.472 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.732 11:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:03.301 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.560 
11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.560 11:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.820 00:19:03.820 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.820 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.820 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.080 11:26:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.080 { 00:19:04.080 "cntlid": 9, 00:19:04.080 "qid": 0, 00:19:04.080 "state": "enabled", 00:19:04.080 "listen_address": { 00:19:04.080 "trtype": "TCP", 00:19:04.080 "adrfam": "IPv4", 00:19:04.080 "traddr": "10.0.0.2", 00:19:04.080 "trsvcid": "4420" 00:19:04.080 }, 00:19:04.080 "peer_address": { 00:19:04.080 "trtype": "TCP", 00:19:04.080 "adrfam": "IPv4", 00:19:04.080 "traddr": "10.0.0.1", 00:19:04.080 "trsvcid": "40208" 00:19:04.080 }, 00:19:04.080 "auth": { 00:19:04.080 "state": "completed", 00:19:04.080 "digest": "sha256", 00:19:04.080 "dhgroup": "ffdhe2048" 00:19:04.080 } 00:19:04.080 } 00:19:04.080 ]' 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.080 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.342 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.342 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.342 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.342 11:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.283 11:26:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.283 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.544 00:19:05.544 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.544 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.544 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.804 { 00:19:05.804 "cntlid": 11, 00:19:05.804 "qid": 0, 00:19:05.804 "state": "enabled", 00:19:05.804 "listen_address": { 00:19:05.804 "trtype": "TCP", 00:19:05.804 "adrfam": "IPv4", 00:19:05.804 "traddr": "10.0.0.2", 00:19:05.804 "trsvcid": "4420" 00:19:05.804 }, 00:19:05.804 "peer_address": { 00:19:05.804 "trtype": "TCP", 00:19:05.804 "adrfam": "IPv4", 00:19:05.804 "traddr": "10.0.0.1", 00:19:05.804 "trsvcid": "40222" 00:19:05.804 }, 00:19:05.804 "auth": { 00:19:05.804 "state": "completed", 00:19:05.804 "digest": "sha256", 00:19:05.804 "dhgroup": "ffdhe2048" 00:19:05.804 } 00:19:05.804 } 00:19:05.804 ]' 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.804 11:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.804 11:26:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.804 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.065 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.065 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.065 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.065 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.007 11:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.007 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.268 00:19:07.268 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.268 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.268 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.529 { 00:19:07.529 "cntlid": 13, 00:19:07.529 "qid": 0, 00:19:07.529 "state": "enabled", 00:19:07.529 "listen_address": { 00:19:07.529 "trtype": "TCP", 00:19:07.529 "adrfam": "IPv4", 00:19:07.529 "traddr": "10.0.0.2", 00:19:07.529 "trsvcid": "4420" 00:19:07.529 }, 00:19:07.529 "peer_address": { 00:19:07.529 "trtype": "TCP", 00:19:07.529 "adrfam": "IPv4", 00:19:07.529 "traddr": "10.0.0.1", 00:19:07.529 "trsvcid": "40258" 00:19:07.529 }, 00:19:07.529 "auth": { 00:19:07.529 "state": "completed", 00:19:07.529 "digest": "sha256", 00:19:07.529 "dhgroup": "ffdhe2048" 00:19:07.529 } 00:19:07.529 } 00:19:07.529 ]' 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.529 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.791 11:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:08.361 11:26:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.621 11:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.882 00:19:08.882 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.882 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.882 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
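[editor's note] The checks the log performs after each attach reduce to the RPC/jq sequence below. This is a minimal hand-written sketch, assuming the rpc.py path, /var/tmp/host.sock socket, nvme0 controller name and cnode0 NQN that appear in the surrounding log lines; it is not the target/auth.sh source itself.

# Sketch: verify the authenticated connection after bdev_nvme_attach_controller.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# The host-side controller must exist and be named nvme0.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# The target reports one qpair; its negotiated auth parameters must match the
# digest/dhgroup configured for this iteration (here sha256 / ffdhe2048).
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == "sha256"    ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == "ffdhe2048" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == "completed" ]]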
00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.142 { 00:19:09.142 "cntlid": 15, 00:19:09.142 "qid": 0, 00:19:09.142 "state": "enabled", 00:19:09.142 "listen_address": { 00:19:09.142 "trtype": "TCP", 00:19:09.142 "adrfam": "IPv4", 00:19:09.142 "traddr": "10.0.0.2", 00:19:09.142 "trsvcid": "4420" 00:19:09.142 }, 00:19:09.142 "peer_address": { 00:19:09.142 "trtype": "TCP", 00:19:09.142 "adrfam": "IPv4", 00:19:09.142 "traddr": "10.0.0.1", 00:19:09.142 "trsvcid": "40268" 00:19:09.142 }, 00:19:09.142 "auth": { 00:19:09.142 "state": "completed", 00:19:09.142 "digest": "sha256", 00:19:09.142 "dhgroup": "ffdhe2048" 00:19:09.142 } 00:19:09.142 } 00:19:09.142 ]' 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.142 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.403 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.403 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.403 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.403 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.403 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.404 11:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.347 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.608 00:19:10.608 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.608 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.608 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.869 11:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.869 { 00:19:10.869 "cntlid": 17, 00:19:10.869 "qid": 0, 00:19:10.869 "state": "enabled", 00:19:10.869 "listen_address": { 00:19:10.869 "trtype": "TCP", 00:19:10.869 "adrfam": "IPv4", 00:19:10.869 "traddr": "10.0.0.2", 00:19:10.869 "trsvcid": "4420" 00:19:10.869 }, 00:19:10.869 "peer_address": { 00:19:10.869 "trtype": "TCP", 00:19:10.869 "adrfam": "IPv4", 00:19:10.869 "traddr": "10.0.0.1", 00:19:10.869 "trsvcid": "40302" 00:19:10.869 }, 00:19:10.869 "auth": { 00:19:10.869 "state": "completed", 00:19:10.869 "digest": "sha256", 00:19:10.869 "dhgroup": "ffdhe3072" 00:19:10.869 } 00:19:10.869 } 00:19:10.869 ]' 00:19:10.869 11:26:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.869 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.129 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.069 11:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.069 
11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.069 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.329 00:19:12.329 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.329 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.329 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.589 { 00:19:12.589 "cntlid": 19, 00:19:12.589 "qid": 0, 00:19:12.589 "state": "enabled", 00:19:12.589 "listen_address": { 00:19:12.589 "trtype": "TCP", 00:19:12.589 "adrfam": "IPv4", 00:19:12.589 "traddr": "10.0.0.2", 00:19:12.589 "trsvcid": "4420" 00:19:12.589 }, 00:19:12.589 "peer_address": { 00:19:12.589 "trtype": "TCP", 00:19:12.589 "adrfam": "IPv4", 00:19:12.589 "traddr": "10.0.0.1", 00:19:12.589 "trsvcid": "40326" 00:19:12.589 }, 00:19:12.589 "auth": { 00:19:12.589 "state": "completed", 00:19:12.589 "digest": "sha256", 00:19:12.589 "dhgroup": "ffdhe3072" 00:19:12.589 } 00:19:12.589 } 00:19:12.589 ]' 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.589 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.590 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.590 11:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.851 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.793 11:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.053 00:19:14.053 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.053 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
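[editor's note] Each pass of the loop visible in this log follows the same provisioning shape. The block below is a condensed sketch of one pass (digest sha256, dhgroup ffdhe3072, key id 2), assuming the same rpc.py path, sockets, NQNs and key names printed above; $secret and $ctrl_secret are hypothetical stand-ins for the DHHC-1:… strings shown in the nvme connect lines. The real loop lives in target/auth.sh.

# Sketch of one connect_authenticate pass, using only the RPCs and nvme-cli
# calls that appear in this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host to the digest/dhgroup under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the subsystem with the bidirectional key pair (key names as logged).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach an in-process controller, run the get_controllers/get_qpairs checks
#    sketched earlier, then detach.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel initiator, then clean up the host entry.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"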
00:19:14.053 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.313 { 00:19:14.313 "cntlid": 21, 00:19:14.313 "qid": 0, 00:19:14.313 "state": "enabled", 00:19:14.313 "listen_address": { 00:19:14.313 "trtype": "TCP", 00:19:14.313 "adrfam": "IPv4", 00:19:14.313 "traddr": "10.0.0.2", 00:19:14.313 "trsvcid": "4420" 00:19:14.313 }, 00:19:14.313 "peer_address": { 00:19:14.313 "trtype": "TCP", 00:19:14.313 "adrfam": "IPv4", 00:19:14.313 "traddr": "10.0.0.1", 00:19:14.313 "trsvcid": "47878" 00:19:14.313 }, 00:19:14.313 "auth": { 00:19:14.313 "state": "completed", 00:19:14.313 "digest": "sha256", 00:19:14.313 "dhgroup": "ffdhe3072" 00:19:14.313 } 00:19:14.313 } 00:19:14.313 ]' 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.313 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.573 11:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.144 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.404 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.665 00:19:15.665 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.665 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.665 11:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.926 { 00:19:15.926 "cntlid": 23, 00:19:15.926 "qid": 0, 00:19:15.926 "state": "enabled", 00:19:15.926 "listen_address": { 00:19:15.926 "trtype": "TCP", 00:19:15.926 "adrfam": "IPv4", 00:19:15.926 "traddr": "10.0.0.2", 00:19:15.926 "trsvcid": "4420" 00:19:15.926 }, 00:19:15.926 "peer_address": { 00:19:15.926 "trtype": "TCP", 00:19:15.926 "adrfam": "IPv4", 
00:19:15.926 "traddr": "10.0.0.1", 00:19:15.926 "trsvcid": "47902" 00:19:15.926 }, 00:19:15.926 "auth": { 00:19:15.926 "state": "completed", 00:19:15.926 "digest": "sha256", 00:19:15.926 "dhgroup": "ffdhe3072" 00:19:15.926 } 00:19:15.926 } 00:19:15.926 ]' 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.926 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.186 11:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.127 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.388 00:19:17.388 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.388 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.388 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.648 { 00:19:17.648 "cntlid": 25, 00:19:17.648 "qid": 0, 00:19:17.648 "state": "enabled", 00:19:17.648 "listen_address": { 00:19:17.648 "trtype": "TCP", 00:19:17.648 "adrfam": "IPv4", 00:19:17.648 "traddr": "10.0.0.2", 00:19:17.648 "trsvcid": "4420" 00:19:17.648 }, 00:19:17.648 "peer_address": { 00:19:17.648 "trtype": "TCP", 00:19:17.648 "adrfam": "IPv4", 00:19:17.648 "traddr": "10.0.0.1", 00:19:17.648 "trsvcid": "47944" 00:19:17.648 }, 00:19:17.648 "auth": { 00:19:17.648 "state": "completed", 00:19:17.648 "digest": "sha256", 00:19:17.648 "dhgroup": "ffdhe4096" 00:19:17.648 } 00:19:17.648 } 00:19:17.648 ]' 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.648 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.908 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.908 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.908 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.908 11:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.908 11:26:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.168 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.738 11:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.999 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.259 00:19:19.259 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.259 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.259 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.519 { 00:19:19.519 "cntlid": 27, 00:19:19.519 "qid": 0, 00:19:19.519 "state": "enabled", 00:19:19.519 "listen_address": { 00:19:19.519 "trtype": "TCP", 00:19:19.519 "adrfam": "IPv4", 00:19:19.519 "traddr": "10.0.0.2", 00:19:19.519 "trsvcid": "4420" 00:19:19.519 }, 00:19:19.519 "peer_address": { 00:19:19.519 "trtype": "TCP", 00:19:19.519 "adrfam": "IPv4", 00:19:19.519 "traddr": "10.0.0.1", 00:19:19.519 "trsvcid": "47964" 00:19:19.519 }, 00:19:19.519 "auth": { 00:19:19.519 "state": "completed", 00:19:19.519 "digest": "sha256", 00:19:19.519 "dhgroup": "ffdhe4096" 00:19:19.519 } 00:19:19.519 } 00:19:19.519 ]' 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.519 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.780 11:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:20.421 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
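For reference, each connect_authenticate iteration in this log reduces to roughly the sequence below (a sketch assembled from the commands printed above; hostrpc and rpc_cmd are the auth.sh helpers that invoke scripts/rpc.py against /var/tmp/host.sock and the target's default RPC socket respectively, key1/ckey1 stand for whichever key index the loop is on, and the DHHC-1 secrets are elided here):

    # host side: restrict the initiator to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # target side: allow the host NQN with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach a controller over TCP, which runs the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller exists and the qpair negotiated the expected auth parameters
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # auth digest/dhgroup/state checked via jq
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a

The loop then repeats this sequence for every key index and for each dhgroup (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192), as the subsequent entries show.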
00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.422 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.682 11:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.943 00:19:20.943 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.943 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.943 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.215 
11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.215 { 00:19:21.215 "cntlid": 29, 00:19:21.215 "qid": 0, 00:19:21.215 "state": "enabled", 00:19:21.215 "listen_address": { 00:19:21.215 "trtype": "TCP", 00:19:21.215 "adrfam": "IPv4", 00:19:21.215 "traddr": "10.0.0.2", 00:19:21.215 "trsvcid": "4420" 00:19:21.215 }, 00:19:21.215 "peer_address": { 00:19:21.215 "trtype": "TCP", 00:19:21.215 "adrfam": "IPv4", 00:19:21.215 "traddr": "10.0.0.1", 00:19:21.215 "trsvcid": "47994" 00:19:21.215 }, 00:19:21.215 "auth": { 00:19:21.215 "state": "completed", 00:19:21.215 "digest": "sha256", 00:19:21.215 "dhgroup": "ffdhe4096" 00:19:21.215 } 00:19:21.215 } 00:19:21.215 ]' 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.215 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.476 11:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:22.048 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.048 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:22.048 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.048 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.308 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.569 00:19:22.569 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.569 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.569 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.829 { 00:19:22.829 "cntlid": 31, 00:19:22.829 "qid": 0, 00:19:22.829 "state": "enabled", 00:19:22.829 "listen_address": { 00:19:22.829 "trtype": "TCP", 00:19:22.829 "adrfam": "IPv4", 00:19:22.829 "traddr": "10.0.0.2", 00:19:22.829 "trsvcid": "4420" 00:19:22.829 }, 00:19:22.829 "peer_address": { 00:19:22.829 "trtype": "TCP", 00:19:22.829 "adrfam": "IPv4", 00:19:22.829 "traddr": "10.0.0.1", 00:19:22.829 "trsvcid": "48020" 00:19:22.829 }, 00:19:22.829 "auth": { 00:19:22.829 "state": "completed", 00:19:22.829 "digest": "sha256", 00:19:22.829 "dhgroup": "ffdhe4096" 00:19:22.829 } 00:19:22.829 } 00:19:22.829 ]' 00:19:22.829 11:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.829 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.829 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.090 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.032 11:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.032 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:24.033 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.604 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.604 { 00:19:24.604 "cntlid": 33, 00:19:24.604 "qid": 0, 00:19:24.604 "state": "enabled", 00:19:24.604 "listen_address": { 00:19:24.604 "trtype": "TCP", 00:19:24.604 "adrfam": "IPv4", 00:19:24.604 "traddr": "10.0.0.2", 00:19:24.604 "trsvcid": "4420" 00:19:24.604 }, 00:19:24.604 "peer_address": { 00:19:24.604 "trtype": "TCP", 00:19:24.604 "adrfam": "IPv4", 00:19:24.604 "traddr": "10.0.0.1", 00:19:24.604 "trsvcid": "35618" 00:19:24.604 }, 00:19:24.604 "auth": { 00:19:24.604 "state": "completed", 00:19:24.604 "digest": "sha256", 00:19:24.604 "dhgroup": "ffdhe6144" 00:19:24.604 } 00:19:24.604 } 00:19:24.604 ]' 00:19:24.604 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.864 11:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.123 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:25.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.694 11:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.954 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.215 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.476 { 00:19:26.476 "cntlid": 35, 00:19:26.476 "qid": 0, 00:19:26.476 "state": "enabled", 00:19:26.476 "listen_address": { 00:19:26.476 "trtype": "TCP", 00:19:26.476 "adrfam": "IPv4", 00:19:26.476 "traddr": "10.0.0.2", 00:19:26.476 "trsvcid": "4420" 00:19:26.476 }, 00:19:26.476 "peer_address": { 00:19:26.476 "trtype": "TCP", 00:19:26.476 "adrfam": "IPv4", 00:19:26.476 "traddr": "10.0.0.1", 00:19:26.476 "trsvcid": "35660" 00:19:26.476 }, 00:19:26.476 "auth": { 00:19:26.476 "state": "completed", 00:19:26.476 "digest": "sha256", 00:19:26.476 "dhgroup": "ffdhe6144" 00:19:26.476 } 00:19:26.476 } 00:19:26.476 ]' 00:19:26.476 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.737 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.997 11:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:27.568 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
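The qpairs JSON that recurs above is what each iteration asserts on. A minimal sketch of that check, using the same jq filters that appear in this log (the expected digest and dhgroup values vary per iteration):

    # capture the qpair list for the subsystem and verify the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]  # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # DH-HMAC-CHAP finished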
00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.829 11:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.090 00:19:28.090 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.090 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.090 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.351 { 00:19:28.351 "cntlid": 37, 00:19:28.351 "qid": 0, 00:19:28.351 "state": "enabled", 00:19:28.351 "listen_address": { 00:19:28.351 "trtype": "TCP", 00:19:28.351 "adrfam": "IPv4", 00:19:28.351 "traddr": "10.0.0.2", 00:19:28.351 "trsvcid": "4420" 00:19:28.351 }, 00:19:28.351 "peer_address": { 00:19:28.351 "trtype": "TCP", 00:19:28.351 "adrfam": "IPv4", 00:19:28.351 "traddr": "10.0.0.1", 00:19:28.351 "trsvcid": "35678" 00:19:28.351 }, 00:19:28.351 "auth": { 00:19:28.351 "state": "completed", 00:19:28.351 "digest": "sha256", 00:19:28.351 "dhgroup": "ffdhe6144" 00:19:28.351 } 00:19:28.351 } 00:19:28.351 ]' 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.351 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.612 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.612 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.612 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.612 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.612 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.873 11:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.444 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.704 11:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.965 00:19:29.965 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.965 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.965 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.225 { 00:19:30.225 "cntlid": 39, 00:19:30.225 "qid": 0, 00:19:30.225 "state": "enabled", 00:19:30.225 "listen_address": { 00:19:30.225 "trtype": "TCP", 00:19:30.225 "adrfam": "IPv4", 00:19:30.225 "traddr": "10.0.0.2", 00:19:30.225 "trsvcid": "4420" 00:19:30.225 }, 00:19:30.225 "peer_address": { 00:19:30.225 "trtype": "TCP", 00:19:30.225 "adrfam": "IPv4", 00:19:30.225 "traddr": "10.0.0.1", 00:19:30.225 "trsvcid": "35714" 00:19:30.225 }, 00:19:30.225 "auth": { 00:19:30.225 "state": "completed", 00:19:30.225 "digest": "sha256", 00:19:30.225 "dhgroup": "ffdhe6144" 00:19:30.225 } 00:19:30.225 } 00:19:30.225 ]' 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.225 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.485 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.485 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.485 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.486 11:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret 
DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.429 11:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.000 00:19:32.000 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.000 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.000 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.261 { 00:19:32.261 "cntlid": 41, 00:19:32.261 "qid": 0, 00:19:32.261 "state": "enabled", 00:19:32.261 "listen_address": { 00:19:32.261 "trtype": "TCP", 00:19:32.261 "adrfam": "IPv4", 00:19:32.261 "traddr": "10.0.0.2", 00:19:32.261 "trsvcid": "4420" 00:19:32.261 }, 00:19:32.261 "peer_address": { 00:19:32.261 "trtype": "TCP", 00:19:32.261 "adrfam": "IPv4", 00:19:32.261 "traddr": "10.0.0.1", 00:19:32.261 "trsvcid": "35728" 00:19:32.261 }, 00:19:32.261 "auth": { 00:19:32.261 "state": "completed", 00:19:32.261 "digest": "sha256", 00:19:32.261 "dhgroup": "ffdhe8192" 00:19:32.261 } 00:19:32.261 } 00:19:32.261 ]' 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.261 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.522 11:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.093 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.353 11:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.922 00:19:33.922 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.922 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.922 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.182 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.182 { 00:19:34.182 "cntlid": 43, 00:19:34.182 "qid": 0, 00:19:34.182 "state": "enabled", 00:19:34.182 "listen_address": { 00:19:34.182 "trtype": "TCP", 00:19:34.182 "adrfam": "IPv4", 00:19:34.182 "traddr": "10.0.0.2", 00:19:34.182 "trsvcid": "4420" 00:19:34.182 }, 00:19:34.182 "peer_address": { 
00:19:34.182 "trtype": "TCP", 00:19:34.182 "adrfam": "IPv4", 00:19:34.182 "traddr": "10.0.0.1", 00:19:34.182 "trsvcid": "35032" 00:19:34.182 }, 00:19:34.182 "auth": { 00:19:34.183 "state": "completed", 00:19:34.183 "digest": "sha256", 00:19:34.183 "dhgroup": "ffdhe8192" 00:19:34.183 } 00:19:34.183 } 00:19:34.183 ]' 00:19:34.183 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.183 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.183 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.442 11:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.384 11:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.953 00:19:35.953 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.953 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.953 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.213 { 00:19:36.213 "cntlid": 45, 00:19:36.213 "qid": 0, 00:19:36.213 "state": "enabled", 00:19:36.213 "listen_address": { 00:19:36.213 "trtype": "TCP", 00:19:36.213 "adrfam": "IPv4", 00:19:36.213 "traddr": "10.0.0.2", 00:19:36.213 "trsvcid": "4420" 00:19:36.213 }, 00:19:36.213 "peer_address": { 00:19:36.213 "trtype": "TCP", 00:19:36.213 "adrfam": "IPv4", 00:19:36.213 "traddr": "10.0.0.1", 00:19:36.213 "trsvcid": "35050" 00:19:36.213 }, 00:19:36.213 "auth": { 00:19:36.213 "state": "completed", 00:19:36.213 "digest": "sha256", 00:19:36.213 "dhgroup": "ffdhe8192" 00:19:36.213 } 00:19:36.213 } 00:19:36.213 ]' 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.213 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.473 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.473 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.473 11:26:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.473 11:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.416 11:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
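[editor's note] The repeated block above is one connect_authenticate iteration driven over two RPC sockets: the target side gets nvmf_subsystem_add_host with a DH-HMAC-CHAP key, while the host-side bdev_nvme stack (socket /var/tmp/host.sock) is first restricted to a single digest/dhgroup and then attached. Below is a minimal sketch of that cycle, assuming an already-running target and host app and key names (key1/ckey1) registered in the target keyring earlier in auth.sh; only the RPC method names and flags are taken from the trace, the paths and literal values are placeholders.

#!/usr/bin/env bash
# Hedged sketch of one DH-HMAC-CHAP connect/verify cycle, modeled on the
# rpc.py invocations visible in this log. Paths, NQNs and key names are
# illustrative stand-ins for the values printed in the trace.
set -euo pipefail

RPC=./scripts/rpc.py                              # target-side RPC, default socket (placeholder path)
HOSTRPC="./scripts/rpc.py -s /var/tmp/host.sock"  # host-side bdev_nvme RPC, as in the log
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
DIGEST=sha256
DHGROUP=ffdhe8192
KEYID=1

# 1. Allow exactly one digest/dhgroup pair on the host initiator.
$HOSTRPC bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# 2. Authorize the host on the subsystem with a DH-HMAC-CHAP key (plus optional ctrlr key).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# 3. Attach from the host; the authentication transaction runs during connect.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# 4. Verify on the target that the admin qpair finished authentication with the
#    expected digest and dhgroup (this mirrors the jq checks in the trace).
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e \
    '.[0].auth | .state == "completed" and .digest == "'"$DIGEST"'" and .dhgroup == "'"$DHGROUP"'"'

# 5. Tear down before the next digest/dhgroup/key combination.
$HOSTRPC bdev_nvme_detach_controller nvme0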
00:19:37.986 00:19:37.986 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.986 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.986 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.247 { 00:19:38.247 "cntlid": 47, 00:19:38.247 "qid": 0, 00:19:38.247 "state": "enabled", 00:19:38.247 "listen_address": { 00:19:38.247 "trtype": "TCP", 00:19:38.247 "adrfam": "IPv4", 00:19:38.247 "traddr": "10.0.0.2", 00:19:38.247 "trsvcid": "4420" 00:19:38.247 }, 00:19:38.247 "peer_address": { 00:19:38.247 "trtype": "TCP", 00:19:38.247 "adrfam": "IPv4", 00:19:38.247 "traddr": "10.0.0.1", 00:19:38.247 "trsvcid": "35080" 00:19:38.247 }, 00:19:38.247 "auth": { 00:19:38.247 "state": "completed", 00:19:38.247 "digest": "sha256", 00:19:38.247 "dhgroup": "ffdhe8192" 00:19:38.247 } 00:19:38.247 } 00:19:38.247 ]' 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.247 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.508 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.508 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.508 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.508 11:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.450 
11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.450 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.712 00:19:39.712 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.712 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.712 11:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.973 11:26:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.973 { 00:19:39.973 "cntlid": 49, 00:19:39.973 "qid": 0, 00:19:39.973 "state": "enabled", 00:19:39.973 "listen_address": { 00:19:39.973 "trtype": "TCP", 00:19:39.973 "adrfam": "IPv4", 00:19:39.973 "traddr": "10.0.0.2", 00:19:39.973 "trsvcid": "4420" 00:19:39.973 }, 00:19:39.973 "peer_address": { 00:19:39.973 "trtype": "TCP", 00:19:39.973 "adrfam": "IPv4", 00:19:39.973 "traddr": "10.0.0.1", 00:19:39.973 "trsvcid": "35122" 00:19:39.973 }, 00:19:39.973 "auth": { 00:19:39.973 "state": "completed", 00:19:39.973 "digest": "sha384", 00:19:39.973 "dhgroup": "null" 00:19:39.973 } 00:19:39.973 } 00:19:39.973 ]' 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.973 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.234 11:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:40.806 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.074 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.335 00:19:41.335 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.335 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.335 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.595 { 00:19:41.595 "cntlid": 51, 00:19:41.595 "qid": 0, 00:19:41.595 "state": "enabled", 00:19:41.595 "listen_address": { 00:19:41.595 "trtype": "TCP", 00:19:41.595 "adrfam": "IPv4", 00:19:41.595 "traddr": "10.0.0.2", 00:19:41.595 "trsvcid": "4420" 00:19:41.595 }, 00:19:41.595 "peer_address": { 00:19:41.595 "trtype": "TCP", 00:19:41.595 "adrfam": "IPv4", 00:19:41.595 "traddr": "10.0.0.1", 00:19:41.595 "trsvcid": "35148" 00:19:41.595 }, 00:19:41.595 "auth": { 00:19:41.595 "state": "completed", 00:19:41.595 "digest": "sha384", 00:19:41.595 "dhgroup": "null" 00:19:41.595 } 00:19:41.595 } 00:19:41.595 ]' 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
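[editor's note] At this point the trace has moved from the sha256/ffdhe8192 pass to sha384 with the null dhgroup, which reflects the nested loops driving these blocks. The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above is also why the key3 iterations carry no --dhchap-ctrlr-key argument: their ckeys entry is empty. A compressed rendering of that control flow is sketched below; the array contents are assumed for illustration and are not copied from auth.sh.

# Sketch of the iteration implied by the trace; only the loop shape and the
# ckey expansion are taken from the log lines above.
connect_authenticate() {
  # $3 is the key index; --dhchap-ctrlr-key is emitted only when ckeys[$3]
  # is non-empty, which is why the key3 passes omit it.
  local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  # ...set host options, add_host, attach, verify qpair auth state, detach
  #    (see the cycle sketched earlier in this log)...
}

digests=(sha256 sha384 sha512)                      # illustrative
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)       # illustrative
keys=([0]=key0 [1]=key1 [2]=key2 [3]=key3)          # keyring names set up earlier in the script
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")        # no ctrlr key for index 3

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done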
00:19:41.595 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.968 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.968 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.968 11:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.968 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.538 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.797 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:42.797 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:42.798 11:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.057 00:19:43.057 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.057 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.057 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.317 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.317 { 00:19:43.317 "cntlid": 53, 00:19:43.317 "qid": 0, 00:19:43.317 "state": "enabled", 00:19:43.317 "listen_address": { 00:19:43.318 "trtype": "TCP", 00:19:43.318 "adrfam": "IPv4", 00:19:43.318 "traddr": "10.0.0.2", 00:19:43.318 "trsvcid": "4420" 00:19:43.318 }, 00:19:43.318 "peer_address": { 00:19:43.318 "trtype": "TCP", 00:19:43.318 "adrfam": "IPv4", 00:19:43.318 "traddr": "10.0.0.1", 00:19:43.318 "trsvcid": "35182" 00:19:43.318 }, 00:19:43.318 "auth": { 00:19:43.318 "state": "completed", 00:19:43.318 "digest": "sha384", 00:19:43.318 "dhgroup": "null" 00:19:43.318 } 00:19:43.318 } 00:19:43.318 ]' 00:19:43.318 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.318 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.318 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.318 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:43.318 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.578 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.578 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.578 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.578 11:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.518 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.518 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.778 00:19:44.778 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.778 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.778 11:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.038 { 00:19:45.038 "cntlid": 55, 00:19:45.038 "qid": 0, 00:19:45.038 "state": "enabled", 00:19:45.038 "listen_address": { 00:19:45.038 "trtype": "TCP", 00:19:45.038 "adrfam": "IPv4", 00:19:45.038 "traddr": "10.0.0.2", 00:19:45.038 "trsvcid": "4420" 00:19:45.038 }, 00:19:45.038 "peer_address": { 00:19:45.038 "trtype": "TCP", 00:19:45.038 "adrfam": "IPv4", 00:19:45.038 "traddr": "10.0.0.1", 00:19:45.038 "trsvcid": "54724" 00:19:45.038 }, 00:19:45.038 "auth": { 00:19:45.038 "state": "completed", 00:19:45.038 "digest": "sha384", 00:19:45.038 "dhgroup": "null" 00:19:45.038 } 00:19:45.038 } 00:19:45.038 ]' 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.038 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.298 11:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:46.238 
11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.238 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.497 00:19:46.497 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.497 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.497 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.758 { 00:19:46.758 "cntlid": 57, 00:19:46.758 "qid": 0, 00:19:46.758 "state": "enabled", 00:19:46.758 "listen_address": { 00:19:46.758 "trtype": "TCP", 00:19:46.758 "adrfam": "IPv4", 00:19:46.758 "traddr": "10.0.0.2", 00:19:46.758 "trsvcid": "4420" 00:19:46.758 }, 00:19:46.758 "peer_address": { 00:19:46.758 "trtype": "TCP", 00:19:46.758 "adrfam": "IPv4", 00:19:46.758 "traddr": "10.0.0.1", 00:19:46.758 "trsvcid": "54762" 00:19:46.758 }, 00:19:46.758 "auth": { 00:19:46.758 "state": "completed", 00:19:46.758 "digest": "sha384", 00:19:46.758 "dhgroup": "ffdhe2048" 00:19:46.758 } 00:19:46.758 } 00:19:46.758 ]' 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.758 11:26:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.758 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.018 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.018 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.018 11:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.018 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.957 11:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.957 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.957 11:26:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.958 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.217 00:19:48.217 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.217 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.217 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.477 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.477 { 00:19:48.477 "cntlid": 59, 00:19:48.477 "qid": 0, 00:19:48.477 "state": "enabled", 00:19:48.477 "listen_address": { 00:19:48.477 "trtype": "TCP", 00:19:48.477 "adrfam": "IPv4", 00:19:48.477 "traddr": "10.0.0.2", 00:19:48.477 "trsvcid": "4420" 00:19:48.477 }, 00:19:48.477 "peer_address": { 00:19:48.477 "trtype": "TCP", 00:19:48.477 "adrfam": "IPv4", 00:19:48.477 "traddr": "10.0.0.1", 00:19:48.477 "trsvcid": "54792" 00:19:48.477 }, 00:19:48.477 "auth": { 00:19:48.477 "state": "completed", 00:19:48.478 "digest": "sha384", 00:19:48.478 "dhgroup": "ffdhe2048" 00:19:48.478 } 00:19:48.478 } 00:19:48.478 ]' 00:19:48.478 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.478 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.478 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.478 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.478 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.738 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.738 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.738 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.738 11:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret 
DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.679 11:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.939 00:19:49.939 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.939 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.939 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.199 { 00:19:50.199 "cntlid": 61, 00:19:50.199 "qid": 0, 00:19:50.199 "state": "enabled", 00:19:50.199 "listen_address": { 00:19:50.199 "trtype": "TCP", 00:19:50.199 "adrfam": "IPv4", 00:19:50.199 "traddr": "10.0.0.2", 00:19:50.199 "trsvcid": "4420" 00:19:50.199 }, 00:19:50.199 "peer_address": { 00:19:50.199 "trtype": "TCP", 00:19:50.199 "adrfam": "IPv4", 00:19:50.199 "traddr": "10.0.0.1", 00:19:50.199 "trsvcid": "54814" 00:19:50.199 }, 00:19:50.199 "auth": { 00:19:50.199 "state": "completed", 00:19:50.199 "digest": "sha384", 00:19:50.199 "dhgroup": "ffdhe2048" 00:19:50.199 } 00:19:50.199 } 00:19:50.199 ]' 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.199 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.496 11:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:19:51.067 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.328 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.329 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.589 00:19:51.589 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.589 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.589 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.849 { 00:19:51.849 "cntlid": 63, 00:19:51.849 "qid": 0, 00:19:51.849 "state": "enabled", 00:19:51.849 "listen_address": { 00:19:51.849 "trtype": "TCP", 00:19:51.849 "adrfam": "IPv4", 00:19:51.849 "traddr": "10.0.0.2", 00:19:51.849 "trsvcid": "4420" 00:19:51.849 }, 00:19:51.849 "peer_address": { 00:19:51.849 "trtype": "TCP", 00:19:51.849 "adrfam": "IPv4", 00:19:51.849 "traddr": "10.0.0.1", 00:19:51.849 "trsvcid": "54844" 00:19:51.849 }, 00:19:51.849 "auth": { 00:19:51.849 "state": "completed", 00:19:51.849 "digest": 
"sha384", 00:19:51.849 "dhgroup": "ffdhe2048" 00:19:51.849 } 00:19:51.849 } 00:19:51.849 ]' 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.849 11:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.849 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.849 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.849 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.849 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.849 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.109 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.051 11:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.051 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.052 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.312 00:19:53.312 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.312 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.312 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.572 { 00:19:53.572 "cntlid": 65, 00:19:53.572 "qid": 0, 00:19:53.572 "state": "enabled", 00:19:53.572 "listen_address": { 00:19:53.572 "trtype": "TCP", 00:19:53.572 "adrfam": "IPv4", 00:19:53.572 "traddr": "10.0.0.2", 00:19:53.572 "trsvcid": "4420" 00:19:53.572 }, 00:19:53.572 "peer_address": { 00:19:53.572 "trtype": "TCP", 00:19:53.572 "adrfam": "IPv4", 00:19:53.572 "traddr": "10.0.0.1", 00:19:53.572 "trsvcid": "57884" 00:19:53.572 }, 00:19:53.572 "auth": { 00:19:53.572 "state": "completed", 00:19:53.572 "digest": "sha384", 00:19:53.572 "dhgroup": "ffdhe3072" 00:19:53.572 } 00:19:53.572 } 00:19:53.572 ]' 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.572 11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.833 
11:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:19:54.404 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.404 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:54.404 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.404 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.665 11:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.926 00:19:54.926 11:26:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.926 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.926 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.185 { 00:19:55.185 "cntlid": 67, 00:19:55.185 "qid": 0, 00:19:55.185 "state": "enabled", 00:19:55.185 "listen_address": { 00:19:55.185 "trtype": "TCP", 00:19:55.185 "adrfam": "IPv4", 00:19:55.185 "traddr": "10.0.0.2", 00:19:55.185 "trsvcid": "4420" 00:19:55.185 }, 00:19:55.185 "peer_address": { 00:19:55.185 "trtype": "TCP", 00:19:55.185 "adrfam": "IPv4", 00:19:55.185 "traddr": "10.0.0.1", 00:19:55.185 "trsvcid": "57910" 00:19:55.185 }, 00:19:55.185 "auth": { 00:19:55.185 "state": "completed", 00:19:55.185 "digest": "sha384", 00:19:55.185 "dhgroup": "ffdhe3072" 00:19:55.185 } 00:19:55.185 } 00:19:55.185 ]' 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.185 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.445 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.445 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.445 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.445 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.445 11:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.386 
11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.386 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.647 00:19:56.647 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.647 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.647 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.907 { 00:19:56.907 "cntlid": 69, 00:19:56.907 "qid": 0, 00:19:56.907 "state": "enabled", 00:19:56.907 "listen_address": { 
00:19:56.907 "trtype": "TCP", 00:19:56.907 "adrfam": "IPv4", 00:19:56.907 "traddr": "10.0.0.2", 00:19:56.907 "trsvcid": "4420" 00:19:56.907 }, 00:19:56.907 "peer_address": { 00:19:56.907 "trtype": "TCP", 00:19:56.907 "adrfam": "IPv4", 00:19:56.907 "traddr": "10.0.0.1", 00:19:56.907 "trsvcid": "57946" 00:19:56.907 }, 00:19:56.907 "auth": { 00:19:56.907 "state": "completed", 00:19:56.907 "digest": "sha384", 00:19:56.907 "dhgroup": "ffdhe3072" 00:19:56.907 } 00:19:56.907 } 00:19:56.907 ]' 00:19:56.907 11:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.907 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.166 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.105 11:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.105 
11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.105 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.365 00:19:58.365 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.365 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.365 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.625 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.625 { 00:19:58.625 "cntlid": 71, 00:19:58.625 "qid": 0, 00:19:58.625 "state": "enabled", 00:19:58.625 "listen_address": { 00:19:58.625 "trtype": "TCP", 00:19:58.625 "adrfam": "IPv4", 00:19:58.625 "traddr": "10.0.0.2", 00:19:58.625 "trsvcid": "4420" 00:19:58.625 }, 00:19:58.625 "peer_address": { 00:19:58.625 "trtype": "TCP", 00:19:58.625 "adrfam": "IPv4", 00:19:58.625 "traddr": "10.0.0.1", 00:19:58.625 "trsvcid": "57968" 00:19:58.625 }, 00:19:58.625 "auth": { 00:19:58.625 "state": "completed", 00:19:58.625 "digest": "sha384", 00:19:58.625 "dhgroup": "ffdhe3072" 00:19:58.625 } 00:19:58.626 } 00:19:58.626 ]' 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.626 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.887 11:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.458 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.718 11:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.978 00:19:59.978 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.978 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.978 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.238 { 00:20:00.238 "cntlid": 73, 00:20:00.238 "qid": 0, 00:20:00.238 "state": "enabled", 00:20:00.238 "listen_address": { 00:20:00.238 "trtype": "TCP", 00:20:00.238 "adrfam": "IPv4", 00:20:00.238 "traddr": "10.0.0.2", 00:20:00.238 "trsvcid": "4420" 00:20:00.238 }, 00:20:00.238 "peer_address": { 00:20:00.238 "trtype": "TCP", 00:20:00.238 "adrfam": "IPv4", 00:20:00.238 "traddr": "10.0.0.1", 00:20:00.238 "trsvcid": "58004" 00:20:00.238 }, 00:20:00.238 "auth": { 00:20:00.238 "state": "completed", 00:20:00.238 "digest": "sha384", 00:20:00.238 "dhgroup": "ffdhe4096" 00:20:00.238 } 00:20:00.238 } 00:20:00.238 ]' 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.238 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.498 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.498 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.498 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.498 11:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:01.440 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.440 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:01.440 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.440 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.441 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.702 00:20:01.702 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.702 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.702 11:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.965 { 00:20:01.965 "cntlid": 75, 00:20:01.965 "qid": 0, 00:20:01.965 "state": "enabled", 00:20:01.965 "listen_address": { 00:20:01.965 "trtype": "TCP", 00:20:01.965 "adrfam": "IPv4", 00:20:01.965 "traddr": "10.0.0.2", 00:20:01.965 "trsvcid": "4420" 00:20:01.965 }, 00:20:01.965 "peer_address": { 00:20:01.965 "trtype": "TCP", 00:20:01.965 "adrfam": "IPv4", 00:20:01.965 "traddr": "10.0.0.1", 00:20:01.965 "trsvcid": "58030" 00:20:01.965 }, 00:20:01.965 "auth": { 00:20:01.965 "state": "completed", 00:20:01.965 "digest": "sha384", 00:20:01.965 "dhgroup": "ffdhe4096" 00:20:01.965 } 00:20:01.965 } 00:20:01.965 ]' 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.965 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.227 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.227 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.227 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.227 11:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.167 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.428 00:20:03.428 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.428 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.428 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.689 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.690 { 00:20:03.690 "cntlid": 77, 00:20:03.690 "qid": 0, 00:20:03.690 "state": "enabled", 00:20:03.690 "listen_address": { 00:20:03.690 "trtype": "TCP", 00:20:03.690 "adrfam": "IPv4", 00:20:03.690 "traddr": "10.0.0.2", 00:20:03.690 "trsvcid": "4420" 00:20:03.690 }, 00:20:03.690 "peer_address": { 00:20:03.690 "trtype": "TCP", 00:20:03.690 "adrfam": "IPv4", 00:20:03.690 "traddr": "10.0.0.1", 00:20:03.690 "trsvcid": "58338" 00:20:03.690 }, 00:20:03.690 "auth": { 00:20:03.690 "state": "completed", 00:20:03.690 "digest": "sha384", 00:20:03.690 "dhgroup": "ffdhe4096" 00:20:03.690 } 00:20:03.690 } 00:20:03.690 ]' 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.690 11:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.951 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.523 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.830 11:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.115 00:20:05.115 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.115 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.115 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.374 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.374 { 00:20:05.374 "cntlid": 79, 00:20:05.374 "qid": 0, 00:20:05.374 "state": "enabled", 00:20:05.374 "listen_address": { 00:20:05.374 "trtype": "TCP", 00:20:05.374 "adrfam": "IPv4", 00:20:05.374 "traddr": "10.0.0.2", 00:20:05.374 "trsvcid": "4420" 00:20:05.374 }, 00:20:05.374 "peer_address": { 00:20:05.374 "trtype": "TCP", 00:20:05.374 "adrfam": "IPv4", 00:20:05.374 "traddr": "10.0.0.1", 00:20:05.374 "trsvcid": "58362" 00:20:05.374 }, 00:20:05.374 "auth": { 00:20:05.374 "state": "completed", 00:20:05.375 "digest": "sha384", 00:20:05.375 "dhgroup": "ffdhe4096" 00:20:05.375 } 00:20:05.375 } 00:20:05.375 ]' 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.375 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.634 11:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.205 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.205 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.465 11:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.037 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.037 { 00:20:07.037 "cntlid": 81, 00:20:07.037 "qid": 0, 00:20:07.037 "state": "enabled", 00:20:07.037 "listen_address": { 00:20:07.037 "trtype": "TCP", 00:20:07.037 "adrfam": "IPv4", 00:20:07.037 "traddr": "10.0.0.2", 00:20:07.037 "trsvcid": "4420" 00:20:07.037 }, 00:20:07.037 "peer_address": { 00:20:07.037 "trtype": "TCP", 00:20:07.037 "adrfam": "IPv4", 00:20:07.037 "traddr": "10.0.0.1", 00:20:07.037 "trsvcid": "58396" 00:20:07.037 }, 00:20:07.037 "auth": { 00:20:07.037 "state": "completed", 00:20:07.037 "digest": "sha384", 00:20:07.037 "dhgroup": "ffdhe6144" 00:20:07.037 } 00:20:07.037 } 00:20:07.037 ]' 00:20:07.037 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.300 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.561 11:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.187 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.759 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.759 { 00:20:08.759 "cntlid": 83, 00:20:08.759 "qid": 0, 00:20:08.759 "state": "enabled", 00:20:08.759 "listen_address": { 00:20:08.759 "trtype": "TCP", 00:20:08.759 "adrfam": "IPv4", 00:20:08.759 "traddr": "10.0.0.2", 00:20:08.759 "trsvcid": "4420" 00:20:08.759 }, 00:20:08.759 "peer_address": { 00:20:08.759 "trtype": "TCP", 00:20:08.759 "adrfam": "IPv4", 00:20:08.759 "traddr": "10.0.0.1", 00:20:08.759 "trsvcid": "58426" 00:20:08.759 }, 00:20:08.759 "auth": { 00:20:08.759 "state": "completed", 00:20:08.759 "digest": "sha384", 00:20:08.759 
"dhgroup": "ffdhe6144" 00:20:08.759 } 00:20:08.759 } 00:20:08.759 ]' 00:20:08.759 11:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.021 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.281 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.853 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.853 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.423 00:20:10.423 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.423 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.423 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.684 { 00:20:10.684 "cntlid": 85, 00:20:10.684 "qid": 0, 00:20:10.684 "state": "enabled", 00:20:10.684 "listen_address": { 00:20:10.684 "trtype": "TCP", 00:20:10.684 "adrfam": "IPv4", 00:20:10.684 "traddr": "10.0.0.2", 00:20:10.684 "trsvcid": "4420" 00:20:10.684 }, 00:20:10.684 "peer_address": { 00:20:10.684 "trtype": "TCP", 00:20:10.684 "adrfam": "IPv4", 00:20:10.684 "traddr": "10.0.0.1", 00:20:10.684 "trsvcid": "58442" 00:20:10.684 }, 00:20:10.684 "auth": { 00:20:10.684 "state": "completed", 00:20:10.684 "digest": "sha384", 00:20:10.684 "dhgroup": "ffdhe6144" 00:20:10.684 } 00:20:10.684 } 00:20:10.684 ]' 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.684 11:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.945 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.514 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.774 11:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.344 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.345 { 00:20:12.345 "cntlid": 87, 00:20:12.345 "qid": 0, 00:20:12.345 "state": "enabled", 00:20:12.345 "listen_address": { 00:20:12.345 "trtype": "TCP", 00:20:12.345 "adrfam": "IPv4", 00:20:12.345 "traddr": "10.0.0.2", 00:20:12.345 "trsvcid": "4420" 00:20:12.345 }, 00:20:12.345 "peer_address": { 00:20:12.345 "trtype": "TCP", 00:20:12.345 "adrfam": "IPv4", 00:20:12.345 "traddr": "10.0.0.1", 00:20:12.345 "trsvcid": "58472" 00:20:12.345 }, 00:20:12.345 "auth": { 00:20:12.345 "state": "completed", 00:20:12.345 "digest": "sha384", 00:20:12.345 "dhgroup": "ffdhe6144" 00:20:12.345 } 00:20:12.345 } 00:20:12.345 ]' 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.345 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.606 11:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.546 11:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.547 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.547 11:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.116 00:20:14.116 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.116 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.116 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.376 { 00:20:14.376 "cntlid": 89, 00:20:14.376 "qid": 0, 00:20:14.376 "state": "enabled", 00:20:14.376 "listen_address": { 00:20:14.376 "trtype": "TCP", 00:20:14.376 "adrfam": "IPv4", 00:20:14.376 "traddr": "10.0.0.2", 
00:20:14.376 "trsvcid": "4420" 00:20:14.376 }, 00:20:14.376 "peer_address": { 00:20:14.376 "trtype": "TCP", 00:20:14.376 "adrfam": "IPv4", 00:20:14.376 "traddr": "10.0.0.1", 00:20:14.376 "trsvcid": "37716" 00:20:14.376 }, 00:20:14.376 "auth": { 00:20:14.376 "state": "completed", 00:20:14.376 "digest": "sha384", 00:20:14.376 "dhgroup": "ffdhe8192" 00:20:14.376 } 00:20:14.376 } 00:20:14.376 ]' 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.376 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.636 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.636 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.636 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.636 11:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.205 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.465 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.034 00:20:16.034 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.034 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.034 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.294 { 00:20:16.294 "cntlid": 91, 00:20:16.294 "qid": 0, 00:20:16.294 "state": "enabled", 00:20:16.294 "listen_address": { 00:20:16.294 "trtype": "TCP", 00:20:16.294 "adrfam": "IPv4", 00:20:16.294 "traddr": "10.0.0.2", 00:20:16.294 "trsvcid": "4420" 00:20:16.294 }, 00:20:16.294 "peer_address": { 00:20:16.294 "trtype": "TCP", 00:20:16.294 "adrfam": "IPv4", 00:20:16.294 "traddr": "10.0.0.1", 00:20:16.294 "trsvcid": "37736" 00:20:16.294 }, 00:20:16.294 "auth": { 00:20:16.294 "state": "completed", 00:20:16.294 "digest": "sha384", 00:20:16.294 "dhgroup": "ffdhe8192" 00:20:16.294 } 00:20:16.294 } 00:20:16.294 ]' 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.294 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.553 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.553 11:27:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.553 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.553 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.493 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.063 00:20:18.063 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.063 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.063 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.323 { 00:20:18.323 "cntlid": 93, 00:20:18.323 "qid": 0, 00:20:18.323 "state": "enabled", 00:20:18.323 "listen_address": { 00:20:18.323 "trtype": "TCP", 00:20:18.323 "adrfam": "IPv4", 00:20:18.323 "traddr": "10.0.0.2", 00:20:18.323 "trsvcid": "4420" 00:20:18.323 }, 00:20:18.323 "peer_address": { 00:20:18.323 "trtype": "TCP", 00:20:18.323 "adrfam": "IPv4", 00:20:18.323 "traddr": "10.0.0.1", 00:20:18.323 "trsvcid": "37760" 00:20:18.323 }, 00:20:18.323 "auth": { 00:20:18.323 "state": "completed", 00:20:18.323 "digest": "sha384", 00:20:18.323 "dhgroup": "ffdhe8192" 00:20:18.323 } 00:20:18.323 } 00:20:18.323 ]' 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.323 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.583 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.152 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.411 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.412 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.980 00:20:19.981 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.981 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.981 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.240 11:27:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.240 { 00:20:20.240 "cntlid": 95, 00:20:20.240 "qid": 0, 00:20:20.240 "state": "enabled", 00:20:20.240 "listen_address": { 00:20:20.240 "trtype": "TCP", 00:20:20.240 "adrfam": "IPv4", 00:20:20.240 "traddr": "10.0.0.2", 00:20:20.240 "trsvcid": "4420" 00:20:20.240 }, 00:20:20.240 "peer_address": { 00:20:20.240 "trtype": "TCP", 00:20:20.240 "adrfam": "IPv4", 00:20:20.240 "traddr": "10.0.0.1", 00:20:20.240 "trsvcid": "37792" 00:20:20.240 }, 00:20:20.240 "auth": { 00:20:20.240 "state": "completed", 00:20:20.240 "digest": "sha384", 00:20:20.240 "dhgroup": "ffdhe8192" 00:20:20.240 } 00:20:20.240 } 00:20:20.240 ]' 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.240 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.500 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.442 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.703 00:20:21.703 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.703 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.703 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.963 { 00:20:21.963 "cntlid": 97, 00:20:21.963 "qid": 0, 00:20:21.963 "state": "enabled", 00:20:21.963 "listen_address": { 00:20:21.963 "trtype": "TCP", 00:20:21.963 "adrfam": "IPv4", 00:20:21.963 "traddr": "10.0.0.2", 00:20:21.963 "trsvcid": "4420" 00:20:21.963 }, 00:20:21.963 "peer_address": { 00:20:21.963 "trtype": "TCP", 00:20:21.963 "adrfam": "IPv4", 00:20:21.963 "traddr": "10.0.0.1", 00:20:21.963 "trsvcid": "37820" 00:20:21.963 }, 00:20:21.963 "auth": { 00:20:21.963 "state": "completed", 00:20:21.963 "digest": "sha512", 00:20:21.963 "dhgroup": "null" 00:20:21.963 } 00:20:21.963 } 00:20:21.963 ]' 00:20:21.963 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.963 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.225 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.795 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.056 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.316 00:20:23.316 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.316 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.316 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.577 { 00:20:23.577 "cntlid": 99, 00:20:23.577 "qid": 0, 00:20:23.577 "state": "enabled", 00:20:23.577 "listen_address": { 00:20:23.577 "trtype": "TCP", 00:20:23.577 "adrfam": "IPv4", 00:20:23.577 "traddr": "10.0.0.2", 00:20:23.577 "trsvcid": "4420" 00:20:23.577 }, 00:20:23.577 "peer_address": { 00:20:23.577 "trtype": "TCP", 00:20:23.577 "adrfam": "IPv4", 00:20:23.577 "traddr": "10.0.0.1", 00:20:23.577 "trsvcid": "39378" 00:20:23.577 }, 00:20:23.577 "auth": { 00:20:23.577 "state": "completed", 00:20:23.577 "digest": "sha512", 00:20:23.577 "dhgroup": "null" 00:20:23.577 } 00:20:23.577 } 00:20:23.577 ]' 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.577 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.838 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 
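For reference, each pass of this trace appears to exercise the same DH-HMAC-CHAP round trip; the sketch below restates that flow using only commands and identifiers already visible in the log (host RPC socket /var/tmp/host.sock, subsystem nqn.2024-03.io.spdk:cnode0, host NQN nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a, target at 10.0.0.2:4420). Key material is elided with <...> placeholders; the exact digest, dhgroup, and key index vary per iteration.

# restrict the SPDK host to one digest/dhgroup pair under test, e.g. sha512 + null
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
# allow the host on the target subsystem with the key pair under test (keyN / ckeyN)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach from the SPDK host side, then confirm the qpair reports the expected auth digest, dhgroup, and state
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# repeat the authentication with the kernel initiator, then tear down
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --dhchap-secret '<DHHC-1 host key>' --dhchap-ctrl-secret '<DHHC-1 controller key>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
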
00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.408 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.668 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.929 00:20:24.929 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.929 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.929 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.189 { 00:20:25.189 "cntlid": 101, 00:20:25.189 "qid": 0, 00:20:25.189 "state": "enabled", 00:20:25.189 "listen_address": { 00:20:25.189 "trtype": "TCP", 00:20:25.189 "adrfam": "IPv4", 00:20:25.189 "traddr": "10.0.0.2", 00:20:25.189 "trsvcid": "4420" 00:20:25.189 }, 00:20:25.189 "peer_address": { 00:20:25.189 "trtype": "TCP", 00:20:25.189 "adrfam": "IPv4", 00:20:25.189 "traddr": "10.0.0.1", 00:20:25.189 "trsvcid": "39410" 00:20:25.189 }, 00:20:25.189 "auth": { 00:20:25.189 "state": "completed", 00:20:25.189 "digest": "sha512", 00:20:25.189 "dhgroup": "null" 00:20:25.189 } 00:20:25.189 } 00:20:25.189 ]' 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.189 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.448 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.448 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.448 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.448 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.017 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.302 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.596 00:20:26.596 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.596 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.596 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.866 { 00:20:26.866 "cntlid": 103, 00:20:26.866 "qid": 0, 00:20:26.866 "state": "enabled", 00:20:26.866 "listen_address": { 00:20:26.866 "trtype": "TCP", 00:20:26.866 "adrfam": "IPv4", 00:20:26.866 "traddr": "10.0.0.2", 00:20:26.866 "trsvcid": "4420" 00:20:26.866 }, 00:20:26.866 "peer_address": { 00:20:26.866 "trtype": "TCP", 00:20:26.866 "adrfam": "IPv4", 00:20:26.866 "traddr": "10.0.0.1", 00:20:26.866 "trsvcid": "39452" 00:20:26.866 }, 00:20:26.866 "auth": { 00:20:26.866 "state": "completed", 00:20:26.866 "digest": "sha512", 00:20:26.866 "dhgroup": "null" 00:20:26.866 } 00:20:26.866 } 00:20:26.866 ]' 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.866 11:27:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.866 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.866 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.866 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.866 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.126 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.695 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 11:27:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.955 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.216 00:20:28.216 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.216 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.216 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.475 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.475 { 00:20:28.475 "cntlid": 105, 00:20:28.475 "qid": 0, 00:20:28.475 "state": "enabled", 00:20:28.475 "listen_address": { 00:20:28.475 "trtype": "TCP", 00:20:28.475 "adrfam": "IPv4", 00:20:28.475 "traddr": "10.0.0.2", 00:20:28.475 "trsvcid": "4420" 00:20:28.475 }, 00:20:28.475 "peer_address": { 00:20:28.475 "trtype": "TCP", 00:20:28.475 "adrfam": "IPv4", 00:20:28.475 "traddr": "10.0.0.1", 00:20:28.475 "trsvcid": "39482" 00:20:28.475 }, 00:20:28.475 "auth": { 00:20:28.475 "state": "completed", 00:20:28.476 "digest": "sha512", 00:20:28.476 "dhgroup": "ffdhe2048" 00:20:28.476 } 00:20:28.476 } 00:20:28.476 ]' 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.476 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.735 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 
80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.674 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.934 00:20:29.934 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.934 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.934 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.193 { 00:20:30.193 "cntlid": 107, 00:20:30.193 "qid": 0, 00:20:30.193 "state": "enabled", 00:20:30.193 "listen_address": { 00:20:30.193 "trtype": "TCP", 00:20:30.193 "adrfam": "IPv4", 00:20:30.193 "traddr": "10.0.0.2", 00:20:30.193 "trsvcid": "4420" 00:20:30.193 }, 00:20:30.193 "peer_address": { 00:20:30.193 "trtype": "TCP", 00:20:30.193 "adrfam": "IPv4", 00:20:30.193 "traddr": "10.0.0.1", 00:20:30.193 "trsvcid": "39514" 00:20:30.193 }, 00:20:30.193 "auth": { 00:20:30.193 "state": "completed", 00:20:30.193 "digest": "sha512", 00:20:30.193 "dhgroup": "ffdhe2048" 00:20:30.193 } 00:20:30.193 } 00:20:30.193 ]' 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.193 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.453 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.022 11:27:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.022 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.281 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.282 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.282 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.282 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.542 00:20:31.542 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.542 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.542 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.802 { 00:20:31.802 "cntlid": 109, 00:20:31.802 "qid": 0, 00:20:31.802 "state": "enabled", 00:20:31.802 "listen_address": { 00:20:31.802 "trtype": "TCP", 00:20:31.802 "adrfam": "IPv4", 00:20:31.802 "traddr": "10.0.0.2", 00:20:31.802 "trsvcid": "4420" 00:20:31.802 }, 00:20:31.802 "peer_address": { 00:20:31.802 "trtype": "TCP", 00:20:31.802 
"adrfam": "IPv4", 00:20:31.802 "traddr": "10.0.0.1", 00:20:31.802 "trsvcid": "39548" 00:20:31.802 }, 00:20:31.802 "auth": { 00:20:31.802 "state": "completed", 00:20:31.802 "digest": "sha512", 00:20:31.802 "dhgroup": "ffdhe2048" 00:20:31.802 } 00:20:31.802 } 00:20:31.802 ]' 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.802 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.802 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.802 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.062 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.062 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.062 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.062 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.002 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.002 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.262 00:20:33.262 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.262 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.262 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.522 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.522 { 00:20:33.522 "cntlid": 111, 00:20:33.522 "qid": 0, 00:20:33.522 "state": "enabled", 00:20:33.522 "listen_address": { 00:20:33.522 "trtype": "TCP", 00:20:33.522 "adrfam": "IPv4", 00:20:33.522 "traddr": "10.0.0.2", 00:20:33.522 "trsvcid": "4420" 00:20:33.522 }, 00:20:33.522 "peer_address": { 00:20:33.522 "trtype": "TCP", 00:20:33.522 "adrfam": "IPv4", 00:20:33.522 "traddr": "10.0.0.1", 00:20:33.522 "trsvcid": "44934" 00:20:33.522 }, 00:20:33.522 "auth": { 00:20:33.523 "state": "completed", 00:20:33.523 "digest": "sha512", 00:20:33.523 "dhgroup": "ffdhe2048" 00:20:33.523 } 00:20:33.523 } 00:20:33.523 ]' 00:20:33.523 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.523 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.523 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.523 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.523 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.783 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.784 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.784 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.784 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.355 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.615 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:34.875 00:20:34.875 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.875 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.875 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.134 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.134 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.134 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.134 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.134 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.135 { 00:20:35.135 "cntlid": 113, 00:20:35.135 "qid": 0, 00:20:35.135 "state": "enabled", 00:20:35.135 "listen_address": { 00:20:35.135 "trtype": "TCP", 00:20:35.135 "adrfam": "IPv4", 00:20:35.135 "traddr": "10.0.0.2", 00:20:35.135 "trsvcid": "4420" 00:20:35.135 }, 00:20:35.135 "peer_address": { 00:20:35.135 "trtype": "TCP", 00:20:35.135 "adrfam": "IPv4", 00:20:35.135 "traddr": "10.0.0.1", 00:20:35.135 "trsvcid": "44978" 00:20:35.135 }, 00:20:35.135 "auth": { 00:20:35.135 "state": "completed", 00:20:35.135 "digest": "sha512", 00:20:35.135 "dhgroup": "ffdhe3072" 00:20:35.135 } 00:20:35.135 } 00:20:35.135 ]' 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.135 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.394 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.394 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.394 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.395 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:35.966 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.227 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.488 00:20:36.488 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.488 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.488 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.747 { 00:20:36.747 
"cntlid": 115, 00:20:36.747 "qid": 0, 00:20:36.747 "state": "enabled", 00:20:36.747 "listen_address": { 00:20:36.747 "trtype": "TCP", 00:20:36.747 "adrfam": "IPv4", 00:20:36.747 "traddr": "10.0.0.2", 00:20:36.747 "trsvcid": "4420" 00:20:36.747 }, 00:20:36.747 "peer_address": { 00:20:36.747 "trtype": "TCP", 00:20:36.747 "adrfam": "IPv4", 00:20:36.747 "traddr": "10.0.0.1", 00:20:36.747 "trsvcid": "45016" 00:20:36.747 }, 00:20:36.747 "auth": { 00:20:36.747 "state": "completed", 00:20:36.747 "digest": "sha512", 00:20:36.747 "dhgroup": "ffdhe3072" 00:20:36.747 } 00:20:36.747 } 00:20:36.747 ]' 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.747 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.007 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.947 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.947 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.948 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.948 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.948 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.207 00:20:38.207 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.207 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.207 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.467 { 00:20:38.467 "cntlid": 117, 00:20:38.467 "qid": 0, 00:20:38.467 "state": "enabled", 00:20:38.467 "listen_address": { 00:20:38.467 "trtype": "TCP", 00:20:38.467 "adrfam": "IPv4", 00:20:38.467 "traddr": "10.0.0.2", 00:20:38.467 "trsvcid": "4420" 00:20:38.467 }, 00:20:38.467 "peer_address": { 00:20:38.467 "trtype": "TCP", 00:20:38.467 "adrfam": "IPv4", 00:20:38.467 "traddr": "10.0.0.1", 00:20:38.467 "trsvcid": "45052" 00:20:38.467 }, 00:20:38.467 "auth": { 00:20:38.467 "state": "completed", 00:20:38.467 "digest": "sha512", 00:20:38.467 "dhgroup": "ffdhe3072" 00:20:38.467 } 00:20:38.467 } 00:20:38.467 ]' 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.467 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.727 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:39.298 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.558 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.818 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.078 { 00:20:40.078 "cntlid": 119, 00:20:40.078 "qid": 0, 00:20:40.078 "state": "enabled", 00:20:40.078 "listen_address": { 00:20:40.078 "trtype": "TCP", 00:20:40.078 "adrfam": "IPv4", 00:20:40.078 "traddr": "10.0.0.2", 00:20:40.078 "trsvcid": "4420" 00:20:40.078 }, 00:20:40.078 "peer_address": { 00:20:40.078 "trtype": "TCP", 00:20:40.078 "adrfam": "IPv4", 00:20:40.078 "traddr": "10.0.0.1", 00:20:40.078 "trsvcid": "45076" 00:20:40.078 }, 00:20:40.078 "auth": { 00:20:40.078 "state": "completed", 00:20:40.078 "digest": "sha512", 00:20:40.078 "dhgroup": "ffdhe3072" 00:20:40.078 } 00:20:40.078 } 00:20:40.078 ]' 00:20:40.078 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.338 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.599 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.169 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.429 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.689 00:20:41.689 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.689 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.689 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.948 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.948 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.948 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.948 11:27:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.948 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.948 { 00:20:41.948 "cntlid": 121, 00:20:41.948 "qid": 0, 00:20:41.948 "state": "enabled", 00:20:41.948 "listen_address": { 00:20:41.948 "trtype": "TCP", 00:20:41.948 "adrfam": "IPv4", 00:20:41.948 "traddr": "10.0.0.2", 00:20:41.948 "trsvcid": "4420" 00:20:41.948 }, 00:20:41.948 "peer_address": { 00:20:41.948 "trtype": "TCP", 00:20:41.949 "adrfam": "IPv4", 00:20:41.949 "traddr": "10.0.0.1", 00:20:41.949 "trsvcid": "45084" 00:20:41.949 }, 00:20:41.949 "auth": { 00:20:41.949 "state": "completed", 00:20:41.949 "digest": "sha512", 00:20:41.949 "dhgroup": "ffdhe4096" 00:20:41.949 } 00:20:41.949 } 00:20:41.949 ]' 00:20:41.949 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.949 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.209 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:42.780 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.781 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.041 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.301 00:20:43.302 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.302 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.302 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.561 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.561 { 00:20:43.561 "cntlid": 123, 00:20:43.561 "qid": 0, 00:20:43.561 "state": "enabled", 00:20:43.561 "listen_address": { 00:20:43.561 "trtype": "TCP", 00:20:43.561 "adrfam": "IPv4", 00:20:43.561 "traddr": "10.0.0.2", 00:20:43.562 "trsvcid": "4420" 00:20:43.562 }, 00:20:43.562 "peer_address": { 00:20:43.562 "trtype": "TCP", 00:20:43.562 "adrfam": "IPv4", 00:20:43.562 "traddr": "10.0.0.1", 00:20:43.562 "trsvcid": "51602" 00:20:43.562 }, 00:20:43.562 "auth": { 00:20:43.562 "state": "completed", 00:20:43.562 "digest": "sha512", 00:20:43.562 "dhgroup": "ffdhe4096" 00:20:43.562 } 00:20:43.562 } 00:20:43.562 ]' 00:20:43.562 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.562 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.562 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.562 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.562 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.821 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.821 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.821 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.821 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.761 
11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.761 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.024 00:20:45.024 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.024 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.025 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.285 { 00:20:45.285 "cntlid": 125, 00:20:45.285 "qid": 0, 00:20:45.285 "state": "enabled", 00:20:45.285 "listen_address": { 00:20:45.285 "trtype": "TCP", 00:20:45.285 "adrfam": "IPv4", 00:20:45.285 "traddr": "10.0.0.2", 00:20:45.285 "trsvcid": "4420" 00:20:45.285 }, 00:20:45.285 "peer_address": { 00:20:45.285 "trtype": "TCP", 00:20:45.285 "adrfam": "IPv4", 00:20:45.285 "traddr": "10.0.0.1", 00:20:45.285 "trsvcid": "51622" 00:20:45.285 }, 00:20:45.285 "auth": { 00:20:45.285 "state": "completed", 00:20:45.285 "digest": "sha512", 00:20:45.285 "dhgroup": "ffdhe4096" 00:20:45.285 } 00:20:45.285 } 00:20:45.285 ]' 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.285 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.545 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.545 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.545 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.545 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret 
DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.486 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.746 00:20:46.746 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.746 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.746 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.007 { 00:20:47.007 "cntlid": 127, 00:20:47.007 "qid": 0, 00:20:47.007 "state": "enabled", 00:20:47.007 "listen_address": { 00:20:47.007 "trtype": "TCP", 00:20:47.007 "adrfam": "IPv4", 00:20:47.007 "traddr": "10.0.0.2", 00:20:47.007 "trsvcid": "4420" 00:20:47.007 }, 00:20:47.007 "peer_address": { 00:20:47.007 "trtype": "TCP", 00:20:47.007 "adrfam": "IPv4", 00:20:47.007 "traddr": "10.0.0.1", 00:20:47.007 "trsvcid": "51644" 00:20:47.007 }, 00:20:47.007 "auth": { 00:20:47.007 "state": "completed", 00:20:47.007 "digest": "sha512", 00:20:47.007 "dhgroup": "ffdhe4096" 00:20:47.007 } 00:20:47.007 } 00:20:47.007 ]' 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.007 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.267 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.267 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.267 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.267 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
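Note: the cycle traced above repeats once per key index. Reconstructed from the xtrace lines (target/auth.sh@34-56), one pass looks roughly like the sketch below; rpc_cmd and hostrpc are simplified stand-ins for the wrapper functions visible in the trace (target RPC, and host-side rpc.py against /var/tmp/host.sock), and the variable names and elided DHHC-1 secrets are placeholders, not the literal helper from spdk/test/nvmf/target/auth.sh.

    #!/usr/bin/env bash
    # One connect_authenticate pass, as seen in the trace (sketch, not the real helper).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }                        # target RPC (simplified wrapper)
    hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme RPC, as in the trace

    digest=sha512 dhgroup=ffdhe6144 keyid=0
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    key='DHHC-1:...'    # host secret for key$keyid (full value appears in the log above)
    ckey='DHHC-1:...'   # controller secret for ckey$keyid

    # Target side: allow this host NQN with the DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: pin the negotiable digest/dhgroup, attach a controller, inspect the qpair.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # Same credentials again through nvme-cli, then clean up the host entry.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn##*:}" --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"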
00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.208 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.468 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.775 { 00:20:48.775 "cntlid": 129, 00:20:48.775 "qid": 0, 00:20:48.775 "state": "enabled", 00:20:48.775 "listen_address": { 00:20:48.775 "trtype": "TCP", 00:20:48.775 "adrfam": "IPv4", 00:20:48.775 "traddr": "10.0.0.2", 00:20:48.775 "trsvcid": "4420" 00:20:48.775 }, 00:20:48.775 "peer_address": { 00:20:48.775 "trtype": "TCP", 00:20:48.775 "adrfam": "IPv4", 00:20:48.775 "traddr": "10.0.0.1", 00:20:48.775 "trsvcid": "51670" 00:20:48.775 }, 00:20:48.775 "auth": { 
00:20:48.775 "state": "completed", 00:20:48.775 "digest": "sha512", 00:20:48.775 "dhgroup": "ffdhe6144" 00:20:48.775 } 00:20:48.775 } 00:20:48.775 ]' 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.775 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.045 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.987 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.987 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.559 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.559 { 00:20:50.559 "cntlid": 131, 00:20:50.559 "qid": 0, 00:20:50.559 "state": "enabled", 00:20:50.559 "listen_address": { 00:20:50.559 "trtype": "TCP", 00:20:50.559 "adrfam": "IPv4", 00:20:50.559 "traddr": "10.0.0.2", 00:20:50.559 "trsvcid": "4420" 00:20:50.559 }, 00:20:50.559 "peer_address": { 00:20:50.559 "trtype": "TCP", 00:20:50.559 "adrfam": "IPv4", 00:20:50.559 "traddr": "10.0.0.1", 00:20:50.559 "trsvcid": "51708" 00:20:50.559 }, 00:20:50.559 "auth": { 00:20:50.559 "state": "completed", 00:20:50.559 "digest": "sha512", 00:20:50.559 "dhgroup": "ffdhe6144" 00:20:50.559 } 00:20:50.559 } 00:20:50.559 ]' 00:20:50.559 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.837 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.098 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.672 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.933 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:52.195 00:20:52.195 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.195 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.195 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.455 { 00:20:52.455 "cntlid": 133, 00:20:52.455 "qid": 0, 00:20:52.455 "state": "enabled", 00:20:52.455 "listen_address": { 00:20:52.455 "trtype": "TCP", 00:20:52.455 "adrfam": "IPv4", 00:20:52.455 "traddr": "10.0.0.2", 00:20:52.455 "trsvcid": "4420" 00:20:52.455 }, 00:20:52.455 "peer_address": { 00:20:52.455 "trtype": "TCP", 00:20:52.455 "adrfam": "IPv4", 00:20:52.455 "traddr": "10.0.0.1", 00:20:52.455 "trsvcid": "51750" 00:20:52.455 }, 00:20:52.455 "auth": { 00:20:52.455 "state": "completed", 00:20:52.455 "digest": "sha512", 00:20:52.455 "dhgroup": "ffdhe6144" 00:20:52.455 } 00:20:52.455 } 00:20:52.455 ]' 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.455 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.715 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.715 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.715 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.715 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.654 11:27:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.654 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.224 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.224 { 00:20:54.224 "cntlid": 135, 00:20:54.224 "qid": 0, 00:20:54.224 "state": "enabled", 00:20:54.224 "listen_address": { 
00:20:54.224 "trtype": "TCP", 00:20:54.224 "adrfam": "IPv4", 00:20:54.224 "traddr": "10.0.0.2", 00:20:54.224 "trsvcid": "4420" 00:20:54.224 }, 00:20:54.224 "peer_address": { 00:20:54.224 "trtype": "TCP", 00:20:54.224 "adrfam": "IPv4", 00:20:54.224 "traddr": "10.0.0.1", 00:20:54.224 "trsvcid": "52708" 00:20:54.224 }, 00:20:54.224 "auth": { 00:20:54.224 "state": "completed", 00:20:54.224 "digest": "sha512", 00:20:54.224 "dhgroup": "ffdhe6144" 00:20:54.224 } 00:20:54.224 } 00:20:54.224 ]' 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.224 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.484 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.484 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.484 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.484 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.484 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.744 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:55.314 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:55.573 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.574 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.144 00:20:56.144 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.144 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.144 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.404 { 00:20:56.404 "cntlid": 137, 00:20:56.404 "qid": 0, 00:20:56.404 "state": "enabled", 00:20:56.404 "listen_address": { 00:20:56.404 "trtype": "TCP", 00:20:56.404 "adrfam": "IPv4", 00:20:56.404 "traddr": "10.0.0.2", 00:20:56.404 "trsvcid": "4420" 00:20:56.404 }, 00:20:56.404 "peer_address": { 00:20:56.404 "trtype": "TCP", 00:20:56.404 "adrfam": "IPv4", 00:20:56.404 "traddr": "10.0.0.1", 00:20:56.404 "trsvcid": "52722" 00:20:56.404 }, 00:20:56.404 "auth": { 00:20:56.404 "state": "completed", 00:20:56.404 "digest": "sha512", 00:20:56.404 "dhgroup": "ffdhe8192" 00:20:56.404 } 00:20:56.404 } 00:20:56.404 ]' 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.404 11:27:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.404 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.664 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.233 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.493 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.493 11:27:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.062 00:20:58.062 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.062 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.062 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.321 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.321 { 00:20:58.321 "cntlid": 139, 00:20:58.321 "qid": 0, 00:20:58.322 "state": "enabled", 00:20:58.322 "listen_address": { 00:20:58.322 "trtype": "TCP", 00:20:58.322 "adrfam": "IPv4", 00:20:58.322 "traddr": "10.0.0.2", 00:20:58.322 "trsvcid": "4420" 00:20:58.322 }, 00:20:58.322 "peer_address": { 00:20:58.322 "trtype": "TCP", 00:20:58.322 "adrfam": "IPv4", 00:20:58.322 "traddr": "10.0.0.1", 00:20:58.322 "trsvcid": "52758" 00:20:58.322 }, 00:20:58.322 "auth": { 00:20:58.322 "state": "completed", 00:20:58.322 "digest": "sha512", 00:20:58.322 "dhgroup": "ffdhe8192" 00:20:58.322 } 00:20:58.322 } 00:20:58.322 ]' 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.322 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.581 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NWEyZGI0M2NlZWU1YjdmM2UwOTAwYjliMTYxYmI1MzlU8Ze4: --dhchap-ctrl-secret DHHC-1:02:YjcyMGJhYzRkZWI0NTE2ZmQyNWYwMTA3YTM3MzhmOTA3MGE4NmVjNjc3NTAxZDc59VAlDQ==: 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.520 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.089 00:21:00.089 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.089 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.089 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.349 { 00:21:00.349 "cntlid": 141, 00:21:00.349 "qid": 0, 00:21:00.349 "state": "enabled", 00:21:00.349 "listen_address": { 00:21:00.349 "trtype": "TCP", 00:21:00.349 "adrfam": "IPv4", 00:21:00.349 "traddr": "10.0.0.2", 00:21:00.349 "trsvcid": "4420" 00:21:00.349 }, 00:21:00.349 "peer_address": { 00:21:00.349 "trtype": "TCP", 00:21:00.349 "adrfam": "IPv4", 00:21:00.349 "traddr": "10.0.0.1", 00:21:00.349 "trsvcid": "52792" 00:21:00.349 }, 00:21:00.349 "auth": { 00:21:00.349 "state": "completed", 00:21:00.349 "digest": "sha512", 00:21:00.349 "dhgroup": "ffdhe8192" 00:21:00.349 } 00:21:00.349 } 00:21:00.349 ]' 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.349 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.609 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:ZmU2ZGFkOTk3ZTRjZDRmNmVjMjBjNzM5NjMwMzY0YWFhNWE3Y2E5MmZkN2VjYWFjS5UYfg==: --dhchap-ctrl-secret DHHC-1:01:YzcwZTM4ZmU3NzljMzM4ZjhjNmVjMDJlYWZlYjI0NmOT+kij: 00:21:01.178 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.437 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.008 00:21:02.008 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.008 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.008 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.268 { 00:21:02.268 "cntlid": 143, 00:21:02.268 "qid": 0, 00:21:02.268 "state": "enabled", 00:21:02.268 "listen_address": { 00:21:02.268 "trtype": "TCP", 00:21:02.268 "adrfam": "IPv4", 00:21:02.268 "traddr": "10.0.0.2", 00:21:02.268 "trsvcid": "4420" 00:21:02.268 }, 00:21:02.268 "peer_address": { 00:21:02.268 "trtype": "TCP", 00:21:02.268 "adrfam": "IPv4", 00:21:02.268 "traddr": "10.0.0.1", 00:21:02.268 "trsvcid": "52822" 00:21:02.268 }, 00:21:02.268 "auth": { 00:21:02.268 "state": "completed", 00:21:02.268 "digest": "sha512", 00:21:02.268 "dhgroup": "ffdhe8192" 00:21:02.268 } 00:21:02.268 } 00:21:02.268 ]' 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.268 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.268 11:27:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.528 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.528 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.528 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.528 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.528 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.788 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.357 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 
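The passage above is one complete connect_authenticate round: the host is limited to a single digest/DH group, the target registers the host NQN with a DH-CHAP key, the host attaches a controller with the matching key pair, and jq then asserts that the qpair reports auth.state == "completed" with the expected digest and dhgroup. A condensed sketch of that sequence, with the socket path, NQNs and key names copied from the log above (exact rpc.py flag spellings can differ between SPDK releases):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict the initiator to one digest / DH group.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side (rpc_cmd in the log talks to the target's own RPC socket):
    # allow this host NQN with a specific DH-CHAP key pair.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller with the matching keys, then read the
    # qpair back and check that authentication completed as configured.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expects "completed"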
00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.617 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.186 00:21:04.186 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.186 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.186 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.446 { 00:21:04.446 "cntlid": 145, 00:21:04.446 "qid": 0, 00:21:04.446 "state": "enabled", 00:21:04.446 "listen_address": { 00:21:04.446 "trtype": "TCP", 00:21:04.446 "adrfam": "IPv4", 00:21:04.446 "traddr": "10.0.0.2", 00:21:04.446 "trsvcid": "4420" 00:21:04.446 }, 00:21:04.446 "peer_address": { 00:21:04.446 "trtype": "TCP", 00:21:04.446 "adrfam": "IPv4", 00:21:04.446 "traddr": "10.0.0.1", 00:21:04.446 "trsvcid": "49200" 00:21:04.446 }, 00:21:04.446 "auth": { 00:21:04.446 "state": "completed", 00:21:04.446 "digest": "sha512", 00:21:04.446 "dhgroup": "ffdhe8192" 00:21:04.446 } 00:21:04.446 } 00:21:04.446 ]' 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.446 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.706 
11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:ZGZkZmI5MTk2ZDM2NzAwMzM3OGNiZTA3YWI0ZWU1MTUxNDM5OWZlMGRkNGE3N2RlkWNdFQ==: --dhchap-ctrl-secret DHHC-1:03:MDliNTEzMzgzM2NhMzYwNDg4MjA4NTY3ZjIxNWE1OWNhYmEyMGNhOTEzY2FlMzBkYmNkOTExNmEzNzBmNTUyMGkzdls=: 00:21:05.275 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.275 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:05.275 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.275 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.276 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.845 request: 00:21:05.845 { 00:21:05.845 "name": "nvme0", 00:21:05.845 "trtype": "tcp", 00:21:05.845 "traddr": 
"10.0.0.2", 00:21:05.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:05.845 "adrfam": "ipv4", 00:21:05.845 "trsvcid": "4420", 00:21:05.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:05.845 "dhchap_key": "key2", 00:21:05.845 "method": "bdev_nvme_attach_controller", 00:21:05.845 "req_id": 1 00:21:05.845 } 00:21:05.845 Got JSON-RPC error response 00:21:05.845 response: 00:21:05.845 { 00:21:05.845 "code": -5, 00:21:05.845 "message": "Input/output error" 00:21:05.845 } 00:21:05.845 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:05.845 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.845 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.845 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.845 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:05.845 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.416 request: 00:21:06.416 { 00:21:06.416 "name": "nvme0", 00:21:06.416 "trtype": "tcp", 00:21:06.416 "traddr": "10.0.0.2", 00:21:06.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:06.416 "adrfam": "ipv4", 00:21:06.416 "trsvcid": "4420", 00:21:06.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.416 "dhchap_key": "key1", 00:21:06.416 "dhchap_ctrlr_key": "ckey2", 00:21:06.416 "method": "bdev_nvme_attach_controller", 00:21:06.416 "req_id": 1 00:21:06.416 } 00:21:06.416 Got JSON-RPC error response 00:21:06.416 response: 00:21:06.416 { 00:21:06.416 "code": -5, 00:21:06.416 "message": "Input/output error" 00:21:06.416 } 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.416 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.986 request: 00:21:06.986 { 00:21:06.986 "name": "nvme0", 00:21:06.986 "trtype": "tcp", 00:21:06.986 "traddr": "10.0.0.2", 00:21:06.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:06.987 "adrfam": "ipv4", 00:21:06.987 "trsvcid": "4420", 00:21:06.987 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.987 "dhchap_key": "key1", 00:21:06.987 "dhchap_ctrlr_key": "ckey1", 00:21:06.987 "method": "bdev_nvme_attach_controller", 00:21:06.987 "req_id": 1 00:21:06.987 } 00:21:06.987 Got JSON-RPC error response 00:21:06.987 response: 00:21:06.987 { 00:21:06.987 "code": -5, 00:21:06.987 "message": "Input/output error" 00:21:06.987 } 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1545494 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1545494 ']' 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1545494 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1545494 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1545494' 00:21:06.987 killing process with pid 1545494 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1545494 00:21:06.987 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1545494 00:21:07.246 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:07.246 11:28:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.246 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1569393 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1569393 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1569393 ']' 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:07.247 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1569393 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1569393 ']' 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
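At this point the test has killed the first target process (pid 1545494) and restarted nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth, so the app holds off subsystem initialization until it is configured over RPC and logs the authentication state machine. Roughly what that start-up amounts to (binary path, flags and namespace name are taken from the log; the polling loop is a simplified stand-in for the waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the target answers on its RPC socket before configuring it
    # (unix sockets are not namespaced, so rpc.py can reach it from outside).
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done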
00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:08.186 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.447 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.017 00:21:09.017 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.017 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.017 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.276 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.276 { 00:21:09.276 
"cntlid": 1, 00:21:09.276 "qid": 0, 00:21:09.276 "state": "enabled", 00:21:09.276 "listen_address": { 00:21:09.276 "trtype": "TCP", 00:21:09.276 "adrfam": "IPv4", 00:21:09.276 "traddr": "10.0.0.2", 00:21:09.276 "trsvcid": "4420" 00:21:09.276 }, 00:21:09.276 "peer_address": { 00:21:09.276 "trtype": "TCP", 00:21:09.276 "adrfam": "IPv4", 00:21:09.277 "traddr": "10.0.0.1", 00:21:09.277 "trsvcid": "49260" 00:21:09.277 }, 00:21:09.277 "auth": { 00:21:09.277 "state": "completed", 00:21:09.277 "digest": "sha512", 00:21:09.277 "dhgroup": "ffdhe8192" 00:21:09.277 } 00:21:09.277 } 00:21:09.277 ]' 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.277 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.537 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:YThkMzMyYzE2MmRhZDU5ZjQwOGFjZmFkM2Y5NWRlZGQxZjA1NGEwNGFjMGZjZjY1ZWZmOWUyYmYyMmYyY2JlY6/SuGc=: 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.478 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.738 request: 00:21:10.738 { 00:21:10.738 "name": "nvme0", 00:21:10.738 "trtype": "tcp", 00:21:10.738 "traddr": "10.0.0.2", 00:21:10.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:10.738 "adrfam": "ipv4", 00:21:10.738 "trsvcid": "4420", 00:21:10.738 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.738 "dhchap_key": "key3", 00:21:10.738 "method": "bdev_nvme_attach_controller", 00:21:10.738 "req_id": 1 00:21:10.738 } 00:21:10.738 Got JSON-RPC error response 00:21:10.738 response: 00:21:10.738 { 00:21:10.738 "code": -5, 00:21:10.738 "message": "Input/output error" 00:21:10.738 } 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:10.738 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.002 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.002 request: 00:21:11.002 { 00:21:11.002 "name": "nvme0", 00:21:11.002 "trtype": "tcp", 00:21:11.002 "traddr": "10.0.0.2", 00:21:11.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:11.002 "adrfam": "ipv4", 00:21:11.002 "trsvcid": "4420", 00:21:11.002 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.002 "dhchap_key": "key3", 00:21:11.002 "method": "bdev_nvme_attach_controller", 00:21:11.002 "req_id": 1 00:21:11.002 } 00:21:11.002 Got JSON-RPC error response 00:21:11.002 response: 00:21:11.002 { 00:21:11.002 "code": -5, 00:21:11.002 "message": "Input/output error" 00:21:11.002 } 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.002 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:11.302 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:11.303 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:11.303 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:11.303 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.303 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:11.565 request: 00:21:11.565 { 00:21:11.565 "name": "nvme0", 00:21:11.565 "trtype": "tcp", 00:21:11.565 "traddr": "10.0.0.2", 00:21:11.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:11.565 "adrfam": "ipv4", 00:21:11.565 "trsvcid": "4420", 00:21:11.565 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.565 "dhchap_key": "key0", 00:21:11.565 "dhchap_ctrlr_key": "key1", 00:21:11.565 "method": "bdev_nvme_attach_controller", 00:21:11.565 "req_id": 1 00:21:11.565 } 00:21:11.565 Got JSON-RPC error response 00:21:11.565 response: 00:21:11.565 { 00:21:11.565 "code": -5, 00:21:11.565 "message": "Input/output error" 00:21:11.565 } 00:21:11.565 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:11.565 11:28:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:11.565 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:11.565 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:11.565 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:11.565 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:11.825 00:21:11.825 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:11.825 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:11.825 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.825 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.825 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.825 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1545648 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1545648 ']' 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1545648 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1545648 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1545648' 00:21:12.086 killing process with pid 1545648 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1545648 00:21:12.086 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1545648 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
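The repeated JSON-RPC failures above are deliberate: the NOT helper from autotest_common.sh inverts the exit status, so attaching with a key the target was not configured for (key2 against a key1-only host entry, mismatched ckey pairs, or key3 after the host has been limited to sha256 or ffdhe2048) must fail, and the code -5 / "Input/output error" responses are the expected outcome. A minimal sketch of that negative-test pattern (NOT is shown here in a simplified form; paths and NQNs are from the log):

    NOT() { ! "$@"; }   # simplified: succeed only if the wrapped command fails

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # The subsystem host entry only carries key1, so presenting key2 has to be
    # rejected; the attach returns -5 (Input/output error) and NOT turns that
    # failure into a test pass.
    NOT $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2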
00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.346 rmmod nvme_tcp 00:21:12.346 rmmod nvme_fabrics 00:21:12.346 rmmod nvme_keyring 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1569393 ']' 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1569393 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1569393 ']' 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1569393 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:12.346 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1569393 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1569393' 00:21:12.606 killing process with pid 1569393 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1569393 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1569393 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.606 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.151 11:28:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.151 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WRX /tmp/spdk.key-sha256.fKn /tmp/spdk.key-sha384.KnJ /tmp/spdk.key-sha512.PcR /tmp/spdk.key-sha512.8XR /tmp/spdk.key-sha384.UlM /tmp/spdk.key-sha256.RoN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:15.151 00:21:15.151 real 2m29.703s 00:21:15.151 user 5m39.964s 00:21:15.151 sys 0m21.186s 00:21:15.151 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:15.151 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.151 ************************************ 00:21:15.151 END TEST 
nvmf_auth_target 00:21:15.151 ************************************ 00:21:15.151 11:28:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:15.151 11:28:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:15.151 11:28:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:15.151 11:28:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:15.151 11:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.151 ************************************ 00:21:15.151 START TEST nvmf_bdevio_no_huge 00:21:15.151 ************************************ 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:15.151 * Looking for test storage... 00:21:15.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.151 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.152 11:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:23.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:23.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.300 11:28:19 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:23.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:23.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.300 
11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.300 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:21:23.301 00:21:23.301 --- 10.0.0.2 ping statistics --- 00:21:23.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.301 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:21:23.301 00:21:23.301 --- 10.0.0.1 ping statistics --- 00:21:23.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.301 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1574553 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1574553 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 1574553 ']' 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:23.301 11:28:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.301 [2024-06-10 11:28:19.600467] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:21:23.301 [2024-06-10 11:28:19.600512] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:23.301 [2024-06-10 11:28:19.681065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.301 [2024-06-10 11:28:19.775845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.301 [2024-06-10 11:28:19.775894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.301 [2024-06-10 11:28:19.775902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.301 [2024-06-10 11:28:19.775908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.301 [2024-06-10 11:28:19.775914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.301 [2024-06-10 11:28:19.776079] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.301 [2024-06-10 11:28:19.776330] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:21:23.301 [2024-06-10 11:28:19.776386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.301 [2024-06-10 11:28:19.776386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.301 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.565 [2024-06-10 11:28:20.526445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 
00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.566 Malloc0 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.566 [2024-06-10 11:28:20.567775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.566 { 00:21:23.566 "params": { 00:21:23.566 "name": "Nvme$subsystem", 00:21:23.566 "trtype": "$TEST_TRANSPORT", 00:21:23.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.566 "adrfam": "ipv4", 00:21:23.566 "trsvcid": "$NVMF_PORT", 00:21:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.566 "hdgst": ${hdgst:-false}, 00:21:23.566 "ddgst": ${ddgst:-false} 00:21:23.566 }, 00:21:23.566 "method": "bdev_nvme_attach_controller" 00:21:23.566 } 00:21:23.566 EOF 00:21:23.566 )") 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:23.566 11:28:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.566 "params": { 00:21:23.566 "name": "Nvme1", 00:21:23.566 "trtype": "tcp", 00:21:23.566 "traddr": "10.0.0.2", 00:21:23.566 "adrfam": "ipv4", 00:21:23.566 "trsvcid": "4420", 00:21:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.566 "hdgst": false, 00:21:23.566 "ddgst": false 00:21:23.566 }, 00:21:23.566 "method": "bdev_nvme_attach_controller" 00:21:23.566 }' 00:21:23.566 [2024-06-10 11:28:20.620097] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:21:23.566 [2024-06-10 11:28:20.620163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1574736 ] 00:21:23.566 [2024-06-10 11:28:20.713168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.826 [2024-06-10 11:28:20.813912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.826 [2024-06-10 11:28:20.814064] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.826 [2024-06-10 11:28:20.814068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.087 I/O targets: 00:21:24.087 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:24.087 00:21:24.087 00:21:24.087 CUnit - A unit testing framework for C - Version 2.1-3 00:21:24.087 http://cunit.sourceforge.net/ 00:21:24.087 00:21:24.087 00:21:24.087 Suite: bdevio tests on: Nvme1n1 00:21:24.087 Test: blockdev write read block ...passed 00:21:24.087 Test: blockdev write zeroes read block ...passed 00:21:24.087 Test: blockdev write zeroes read no split ...passed 00:21:24.087 Test: blockdev write zeroes read split ...passed 00:21:24.087 Test: blockdev write zeroes read split partial ...passed 00:21:24.087 Test: blockdev reset ...[2024-06-10 11:28:21.290231] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.087 [2024-06-10 11:28:21.290292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8900 (9): Bad file descriptor 00:21:24.087 [2024-06-10 11:28:21.309576] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:24.087 passed 00:21:24.087 Test: blockdev write read 8 blocks ...passed 00:21:24.348 Test: blockdev write read size > 128k ...passed 00:21:24.348 Test: blockdev write read invalid size ...passed 00:21:24.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:24.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:24.348 Test: blockdev write read max offset ...passed 00:21:24.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:24.348 Test: blockdev writev readv 8 blocks ...passed 00:21:24.348 Test: blockdev writev readv 30 x 1block ...passed 00:21:24.348 Test: blockdev writev readv block ...passed 00:21:24.348 Test: blockdev writev readv size > 128k ...passed 00:21:24.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:24.348 Test: blockdev comparev and writev ...[2024-06-10 11:28:21.534567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.534592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.534604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.534610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.535100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.535108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.535123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.535595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.535603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.535621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.536091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.348 [2024-06-10 11:28:21.536109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.348 [2024-06-10 11:28:21.536114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:24.610 passed 00:21:24.610 Test: blockdev nvme passthru rw ...passed 00:21:24.610 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:28:21.620689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.610 [2024-06-10 11:28:21.620700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:24.610 [2024-06-10 11:28:21.621052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.610 [2024-06-10 11:28:21.621059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.610 [2024-06-10 11:28:21.621419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.610 [2024-06-10 11:28:21.621426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:24.610 [2024-06-10 11:28:21.621780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.610 [2024-06-10 11:28:21.621787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.610 passed 00:21:24.610 Test: blockdev nvme admin passthru ...passed 00:21:24.610 Test: blockdev copy ...passed 00:21:24.610 00:21:24.610 Run Summary: Type Total Ran Passed Failed Inactive 00:21:24.610 suites 1 1 n/a 0 0 00:21:24.610 tests 23 23 23 0 0 00:21:24.610 asserts 152 152 152 0 n/a 00:21:24.610 00:21:24.610 Elapsed time = 1.201 seconds 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.871 rmmod nvme_tcp 00:21:24.871 rmmod nvme_fabrics 00:21:24.871 rmmod nvme_keyring 00:21:24.871 11:28:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1574553 ']' 00:21:24.871 11:28:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1574553 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 1574553 ']' 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 1574553 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1574553 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1574553' 00:21:24.871 killing process with pid 1574553 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 1574553 00:21:24.871 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 1574553 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.444 11:28:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.360 11:28:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.360 00:21:27.360 real 0m12.605s 00:21:27.360 user 0m14.554s 00:21:27.360 sys 0m6.653s 00:21:27.360 11:28:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:27.360 11:28:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.360 ************************************ 00:21:27.360 END TEST nvmf_bdevio_no_huge 00:21:27.360 ************************************ 00:21:27.360 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.360 11:28:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:27.360 11:28:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:27.360 11:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.360 ************************************ 00:21:27.360 START TEST nvmf_tls 00:21:27.361 ************************************ 00:21:27.361 11:28:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.620 * Looking for test storage... 
00:21:27.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.620 11:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.620 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.621 11:28:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.765 
11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.765 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.766 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.766 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.766 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.027 11:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:21:36.027 00:21:36.027 --- 10.0.0.2 ping statistics --- 00:21:36.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.027 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:21:36.027 00:21:36.027 --- 10.0.0.1 ping statistics --- 00:21:36.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.027 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1579394 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1579394 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1579394 ']' 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:36.027 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.027 [2024-06-10 11:28:33.112802] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:21:36.027 [2024-06-10 11:28:33.112875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.027 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.027 [2024-06-10 11:28:33.188319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.287 [2024-06-10 11:28:33.258055] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.287 [2024-06-10 11:28:33.258095] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.287 [2024-06-10 11:28:33.258107] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.287 [2024-06-10 11:28:33.258114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.287 [2024-06-10 11:28:33.258120] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.287 [2024-06-10 11:28:33.258139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.857 11:28:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.858 11:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:36.858 11:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:37.117 true 00:21:37.117 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.117 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:37.378 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:37.378 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:37.378 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:37.378 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.378 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:37.638 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:37.638 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:37.638 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:37.898 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.898 11:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:38.158 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:38.418 11:28:35 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.418 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:38.677 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:38.677 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:38.677 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:38.677 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.677 11:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.936 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.937 11:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.UY1I2qiue9 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.j6QnPz1skZ 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.UY1I2qiue9 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.j6QnPz1skZ 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:39.196 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:39.454 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.UY1I2qiue9 00:21:39.454 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UY1I2qiue9 00:21:39.454 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.713 [2024-06-10 11:28:36.813279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.713 11:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.973 11:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.973 [2024-06-10 11:28:37.194239] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.973 [2024-06-10 11:28:37.194430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.233 11:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.233 malloc0 00:21:40.233 11:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.493 11:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UY1I2qiue9 00:21:40.753 [2024-06-10 11:28:37.738204] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.753 11:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UY1I2qiue9 00:21:40.753 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.741 Initializing NVMe Controllers 00:21:50.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:50.741 Initialization complete. Launching workers. 
00:21:50.741 ======================================================== 00:21:50.741 Latency(us) 00:21:50.741 Device Information : IOPS MiB/s Average min max 00:21:50.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14536.79 56.78 4403.20 1017.16 7760.88 00:21:50.741 ======================================================== 00:21:50.741 Total : 14536.79 56.78 4403.20 1017.16 7760.88 00:21:50.741 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UY1I2qiue9 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UY1I2qiue9' 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1581884 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1581884 /var/tmp/bdevperf.sock 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1581884 ']' 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:50.741 11:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.741 [2024-06-10 11:28:47.933732] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:21:50.741 [2024-06-10 11:28:47.933789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581884 ] 00:21:50.741 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.001 [2024-06-10 11:28:47.989774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.001 [2024-06-10 11:28:48.042661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.001 11:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:51.001 11:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:51.001 11:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UY1I2qiue9 00:21:51.306 [2024-06-10 11:28:48.289602] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.306 [2024-06-10 11:28:48.289665] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.306 TLSTESTn1 00:21:51.306 11:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.306 Running I/O for 10 seconds... 00:22:01.345 00:22:01.346 Latency(us) 00:22:01.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.346 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.346 Verification LBA range: start 0x0 length 0x2000 00:22:01.346 TLSTESTn1 : 10.02 3477.40 13.58 0.00 0.00 36764.83 6074.68 68560.74 00:22:01.346 =================================================================================================================== 00:22:01.346 Total : 3477.40 13.58 0.00 0.00 36764.83 6074.68 68560.74 00:22:01.346 0 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1581884 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1581884 ']' 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1581884 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:01.346 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1581884 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1581884' 00:22:01.607 killing process with pid 1581884 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1581884 00:22:01.607 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.607 00:22:01.607 Latency(us) 00:22:01.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:01.607 =================================================================================================================== 00:22:01.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.607 [2024-06-10 11:28:58.603080] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1581884 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j6QnPz1skZ 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j6QnPz1skZ 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j6QnPz1skZ 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.j6QnPz1skZ' 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1583691 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1583691 /var/tmp/bdevperf.sock 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1583691 ']' 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:01.607 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.607 [2024-06-10 11:28:58.775139] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:01.607 [2024-06-10 11:28:58.775214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583691 ] 00:22:01.607 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.868 [2024-06-10 11:28:58.833076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.868 [2024-06-10 11:28:58.885507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.868 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:01.868 11:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:01.868 11:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j6QnPz1skZ 00:22:02.128 [2024-06-10 11:28:59.144133] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.128 [2024-06-10 11:28:59.144195] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.128 [2024-06-10 11:28:59.154403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.128 [2024-06-10 11:28:59.155298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11678b0 (107): Transport endpoint is not connected 00:22:02.128 [2024-06-10 11:28:59.156292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11678b0 (9): Bad file descriptor 00:22:02.128 [2024-06-10 11:28:59.157294] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.128 [2024-06-10 11:28:59.157301] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.128 [2024-06-10 11:28:59.157307] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:02.128 request: 00:22:02.128 { 00:22:02.128 "name": "TLSTEST", 00:22:02.128 "trtype": "tcp", 00:22:02.128 "traddr": "10.0.0.2", 00:22:02.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.128 "adrfam": "ipv4", 00:22:02.128 "trsvcid": "4420", 00:22:02.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.128 "psk": "/tmp/tmp.j6QnPz1skZ", 00:22:02.128 "method": "bdev_nvme_attach_controller", 00:22:02.128 "req_id": 1 00:22:02.128 } 00:22:02.128 Got JSON-RPC error response 00:22:02.128 response: 00:22:02.128 { 00:22:02.128 "code": -5, 00:22:02.128 "message": "Input/output error" 00:22:02.128 } 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1583691 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1583691 ']' 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1583691 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1583691 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1583691' 00:22:02.128 killing process with pid 1583691 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1583691 00:22:02.128 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.128 00:22:02.128 Latency(us) 00:22:02.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.128 =================================================================================================================== 00:22:02.128 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.128 [2024-06-10 11:28:59.242417] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1583691 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:02.128 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UY1I2qiue9 00:22:02.129 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:02.129 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UY1I2qiue9 00:22:02.129 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UY1I2qiue9 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UY1I2qiue9' 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1583867 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1583867 /var/tmp/bdevperf.sock 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1583867 ']' 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.389 [2024-06-10 11:28:59.402797] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:02.389 [2024-06-10 11:28:59.402856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583867 ] 00:22:02.389 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.389 [2024-06-10 11:28:59.456099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.389 [2024-06-10 11:28:59.508586] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:02.389 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.UY1I2qiue9 00:22:02.650 [2024-06-10 11:28:59.755555] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.650 [2024-06-10 11:28:59.755615] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.650 [2024-06-10 11:28:59.765813] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.650 [2024-06-10 11:28:59.765839] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.650 [2024-06-10 11:28:59.765862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.650 [2024-06-10 11:28:59.766536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154d8b0 (107): Transport endpoint is not connected 00:22:02.650 [2024-06-10 11:28:59.767531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154d8b0 (9): Bad file descriptor 00:22:02.650 [2024-06-10 11:28:59.768533] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.650 [2024-06-10 11:28:59.768539] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.650 [2024-06-10 11:28:59.768546] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:02.650 request: 00:22:02.650 { 00:22:02.650 "name": "TLSTEST", 00:22:02.650 "trtype": "tcp", 00:22:02.650 "traddr": "10.0.0.2", 00:22:02.650 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.650 "adrfam": "ipv4", 00:22:02.650 "trsvcid": "4420", 00:22:02.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.650 "psk": "/tmp/tmp.UY1I2qiue9", 00:22:02.650 "method": "bdev_nvme_attach_controller", 00:22:02.650 "req_id": 1 00:22:02.650 } 00:22:02.650 Got JSON-RPC error response 00:22:02.650 response: 00:22:02.650 { 00:22:02.650 "code": -5, 00:22:02.650 "message": "Input/output error" 00:22:02.650 } 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1583867 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1583867 ']' 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1583867 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1583867 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:02.650 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:02.651 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1583867' 00:22:02.651 killing process with pid 1583867 00:22:02.651 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1583867 00:22:02.651 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.651 00:22:02.651 Latency(us) 00:22:02.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.651 =================================================================================================================== 00:22:02.651 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.651 [2024-06-10 11:28:59.839556] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.651 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1583867 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UY1I2qiue9 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UY1I2qiue9 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UY1I2qiue9 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UY1I2qiue9' 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1584003 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1584003 /var/tmp/bdevperf.sock 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1584003 ']' 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:02.911 11:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.911 [2024-06-10 11:29:00.003518] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:02.911 [2024-06-10 11:29:00.003572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584003 ] 00:22:02.911 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.911 [2024-06-10 11:29:00.060457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.911 [2024-06-10 11:29:00.115461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.172 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:03.172 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:03.172 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UY1I2qiue9 00:22:03.172 [2024-06-10 11:29:00.362125] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.172 [2024-06-10 11:29:00.362184] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.172 [2024-06-10 11:29:00.367333] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.172 [2024-06-10 11:29:00.367354] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.172 [2024-06-10 11:29:00.367378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.172 [2024-06-10 11:29:00.368167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d798b0 (107): Transport endpoint is not connected 00:22:03.172 [2024-06-10 11:29:00.369163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d798b0 (9): Bad file descriptor 00:22:03.172 [2024-06-10 11:29:00.370165] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:03.172 [2024-06-10 11:29:00.370172] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.172 [2024-06-10 11:29:00.370179] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:03.172 request: 00:22:03.172 { 00:22:03.172 "name": "TLSTEST", 00:22:03.172 "trtype": "tcp", 00:22:03.172 "traddr": "10.0.0.2", 00:22:03.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.172 "adrfam": "ipv4", 00:22:03.172 "trsvcid": "4420", 00:22:03.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.173 "psk": "/tmp/tmp.UY1I2qiue9", 00:22:03.173 "method": "bdev_nvme_attach_controller", 00:22:03.173 "req_id": 1 00:22:03.173 } 00:22:03.173 Got JSON-RPC error response 00:22:03.173 response: 00:22:03.173 { 00:22:03.173 "code": -5, 00:22:03.173 "message": "Input/output error" 00:22:03.173 } 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1584003 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1584003 ']' 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1584003 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:03.173 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1584003 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1584003' 00:22:03.434 killing process with pid 1584003 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1584003 00:22:03.434 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.434 00:22:03.434 Latency(us) 00:22:03.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.434 =================================================================================================================== 00:22:03.434 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.434 [2024-06-10 11:29:00.441154] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1584003 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
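Each of the three failed attach attempts above shares one mechanism: the target resolves the pre-shared key by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", so a key registered for nqn.2016-06.io.spdk:host1 and nqn.2016-06.io.spdk:cnode1 is not found when either NQN differs, and a key file whose contents do not match the key registered for that identity fails the handshake; in every case bdev_nvme_attach_controller reports JSON-RPC code -5 (Input/output error). A minimal sketch of one negative case, assuming the same workspace path and a bdevperf instance already started with "-z -r /var/tmp/bdevperf.sock" ($RPC is shorthand introduced only for readability):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # host2 is not registered for cnode1, so no PSK exists for the identity
    # "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the
    # attach is expected to fail with -5 (Input/output error).
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /tmp/tmp.UY1I2qiue9 || echo "attach failed as expected"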
00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1584021 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1584021 /var/tmp/bdevperf.sock 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1584021 ']' 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:03.434 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.434 [2024-06-10 11:29:00.596448] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:03.434 [2024-06-10 11:29:00.596500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584021 ] 00:22:03.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.434 [2024-06-10 11:29:00.650456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.695 [2024-06-10 11:29:00.702853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.695 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:03.695 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:03.695 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.956 [2024-06-10 11:29:00.948778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.956 [2024-06-10 11:29:00.950176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e270 (9): Bad file descriptor 00:22:03.956 [2024-06-10 11:29:00.951176] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.956 [2024-06-10 11:29:00.951183] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.956 [2024-06-10 11:29:00.951190] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:03.956 request: 00:22:03.956 { 00:22:03.956 "name": "TLSTEST", 00:22:03.956 "trtype": "tcp", 00:22:03.956 "traddr": "10.0.0.2", 00:22:03.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.956 "adrfam": "ipv4", 00:22:03.956 "trsvcid": "4420", 00:22:03.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.956 "method": "bdev_nvme_attach_controller", 00:22:03.956 "req_id": 1 00:22:03.956 } 00:22:03.956 Got JSON-RPC error response 00:22:03.956 response: 00:22:03.956 { 00:22:03.956 "code": -5, 00:22:03.956 "message": "Input/output error" 00:22:03.956 } 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1584021 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1584021 ']' 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1584021 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:03.956 11:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1584021 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1584021' 00:22:03.956 killing process with pid 1584021 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1584021 00:22:03.956 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.956 00:22:03.956 Latency(us) 00:22:03.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.956 =================================================================================================================== 00:22:03.956 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1584021 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1579394 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1579394 ']' 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1579394 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:03.956 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1579394 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1579394' 00:22:04.218 killing process with pid 1579394 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1579394 
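With the no-PSK case also rejected, the first target instance is shut down before a longer key is generated for the next phase. The keys used throughout follow the interchange format visible in this run: the literal prefix "NVMeTLSkey-1:", a two-digit field tied to the digest argument (1 or 2 in this run), a base64 text body, and a trailing ":". The scripts always place such a key in a private temporary file before handing it to --psk; a minimal sketch of that step, reusing the first key from this run (key_path is just an illustrative name):

    # Store an interchange-format key the way target/tls.sh does: mktemp,
    # write the key without a trailing newline, restrict permissions to 0600
    # (a later case in this run shows that 0666 is rejected with
    # "Incorrect permissions for PSK file").
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"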
00:22:04.218 [2024-06-10 11:29:01.192584] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1579394 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.xUfSP8AAwf 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.xUfSP8AAwf 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1584313 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1584313 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1584313 ']' 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:04.218 11:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.479 [2024-06-10 11:29:01.446081] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:04.479 [2024-06-10 11:29:01.446143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.479 [2024-06-10 11:29:01.517831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.479 [2024-06-10 11:29:01.585464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.479 [2024-06-10 11:29:01.585503] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.479 [2024-06-10 11:29:01.585510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.479 [2024-06-10 11:29:01.585516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.479 [2024-06-10 11:29:01.585522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.479 [2024-06-10 11:29:01.585540] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xUfSP8AAwf 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.420 [2024-06-10 11:29:02.502881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.420 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.680 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.680 [2024-06-10 11:29:02.835720] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.680 [2024-06-10 11:29:02.835915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.680 11:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.941 malloc0 00:22:05.941 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.200 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 
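The setup_nvmf_tgt helper above reduces to six RPCs against the freshly started target: create the TCP transport, create the subsystem, add a TCP listener with -k (which, per the nvmf_tcp_listen notice, turns on the experimental TLS support for that listener), create a malloc bdev and attach it as a namespace, and register the allowed host with its PSK file. A condensed restatement, assuming the same workspace path and key file ($RPC is shorthand introduced only for readability):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf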
00:22:06.200 [2024-06-10 11:29:03.415768] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xUfSP8AAwf 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xUfSP8AAwf' 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1584661 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1584661 /var/tmp/bdevperf.sock 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1584661 ']' 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.461 [2024-06-10 11:29:03.459910] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:06.461 [2024-06-10 11:29:03.459959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584661 ] 00:22:06.461 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.461 [2024-06-10 11:29:03.514550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.461 [2024-06-10 11:29:03.567092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:06.461 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:06.721 [2024-06-10 11:29:03.773663] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.721 [2024-06-10 11:29:03.773729] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:06.721 TLSTESTn1 00:22:06.721 11:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.721 Running I/O for 10 seconds... 00:22:18.947 00:22:18.947 Latency(us) 00:22:18.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.947 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.947 Verification LBA range: start 0x0 length 0x2000 00:22:18.947 TLSTESTn1 : 10.02 3543.19 13.84 0.00 0.00 36073.40 6402.36 253271.43 00:22:18.947 =================================================================================================================== 00:22:18.947 Total : 3543.19 13.84 0.00 0.00 36073.40 6402.36 253271.43 00:22:18.947 0 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1584661 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1584661 ']' 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1584661 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1584661 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1584661' 00:22:18.947 killing process with pid 1584661 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1584661 00:22:18.947 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.947 00:22:18.947 Latency(us) 00:22:18.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:18.947 =================================================================================================================== 00:22:18.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.947 [2024-06-10 11:29:14.058721] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1584661 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.xUfSP8AAwf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xUfSP8AAwf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xUfSP8AAwf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xUfSP8AAwf 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xUfSP8AAwf' 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1586393 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1586393 /var/tmp/bdevperf.sock 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1586393 ']' 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:18.947 11:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.947 [2024-06-10 11:29:14.225942] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:18.947 [2024-06-10 11:29:14.225995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586393 ] 00:22:18.947 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.947 [2024-06-10 11:29:14.281367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.948 [2024-06-10 11:29:14.333748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:18.948 [2024-06-10 11:29:15.142245] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.948 [2024-06-10 11:29:15.142291] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:18.948 [2024-06-10 11:29:15.142296] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.xUfSP8AAwf 00:22:18.948 request: 00:22:18.948 { 00:22:18.948 "name": "TLSTEST", 00:22:18.948 "trtype": "tcp", 00:22:18.948 "traddr": "10.0.0.2", 00:22:18.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.948 "adrfam": "ipv4", 00:22:18.948 "trsvcid": "4420", 00:22:18.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.948 "psk": "/tmp/tmp.xUfSP8AAwf", 00:22:18.948 "method": "bdev_nvme_attach_controller", 00:22:18.948 "req_id": 1 00:22:18.948 } 00:22:18.948 Got JSON-RPC error response 00:22:18.948 response: 00:22:18.948 { 00:22:18.948 "code": -1, 00:22:18.948 "message": "Operation not permitted" 00:22:18.948 } 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1586393 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1586393 ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1586393 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1586393 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1586393' 00:22:18.948 killing process with pid 1586393 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1586393 00:22:18.948 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.948 00:22:18.948 Latency(us) 00:22:18.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.948 =================================================================================================================== 00:22:18.948 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 1586393 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1584313 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1584313 ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1584313 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1584313 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1584313' 00:22:18.948 killing process with pid 1584313 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1584313 00:22:18.948 [2024-06-10 11:29:15.378149] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1584313 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1586532 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1586532 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1586532 ']' 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 [2024-06-10 11:29:15.566962] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:18.948 [2024-06-10 11:29:15.567014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.948 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.948 [2024-06-10 11:29:15.636103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.948 [2024-06-10 11:29:15.694493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.948 [2024-06-10 11:29:15.694530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.948 [2024-06-10 11:29:15.694537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.948 [2024-06-10 11:29:15.694544] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.948 [2024-06-10 11:29:15.694550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.948 [2024-06-10 11:29:15.694575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xUfSP8AAwf 00:22:18.948 11:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.948 [2024-06-10 11:29:15.989941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.948 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.948 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.208 [2024-06-10 11:29:16.338804] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:22:19.208 [2024-06-10 11:29:16.338988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.208 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.468 malloc0 00:22:19.468 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:19.727 [2024-06-10 11:29:16.894575] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:19.727 [2024-06-10 11:29:16.894603] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:19.727 [2024-06-10 11:29:16.894627] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:19.727 request: 00:22:19.727 { 00:22:19.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.727 "host": "nqn.2016-06.io.spdk:host1", 00:22:19.727 "psk": "/tmp/tmp.xUfSP8AAwf", 00:22:19.727 "method": "nvmf_subsystem_add_host", 00:22:19.727 "req_id": 1 00:22:19.727 } 00:22:19.727 Got JSON-RPC error response 00:22:19.727 response: 00:22:19.727 { 00:22:19.727 "code": -32603, 00:22:19.727 "message": "Internal error" 00:22:19.727 } 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1586532 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1586532 ']' 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1586532 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:19.727 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1586532 00:22:19.987 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:19.987 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:19.987 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1586532' 00:22:19.987 killing process with pid 1586532 00:22:19.987 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1586532 00:22:19.987 11:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1586532 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.xUfSP8AAwf 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1586849 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1586849 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1586849 ']' 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:19.987 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.987 [2024-06-10 11:29:17.134443] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:19.987 [2024-06-10 11:29:17.134491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.987 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.987 [2024-06-10 11:29:17.193052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.247 [2024-06-10 11:29:17.252606] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.247 [2024-06-10 11:29:17.252639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.247 [2024-06-10 11:29:17.252647] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.247 [2024-06-10 11:29:17.252656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.247 [2024-06-10 11:29:17.252662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
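Note on the failure above: nvmf_subsystem_add_host returned "Could not retrieve PSK from file" / "Internal error" because target/tls.sh@170 had loosened the key file to mode 0666, and the target refuses to load a PSK file with overly permissive permissions. target/tls.sh@181 restores the key to mode 0600 before the target is restarted here, so the same setup sequence is expected to succeed on the next pass. A condensed sketch of the relevant calls from this trace (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):
  chmod 0600 /tmp/tmp.xUfSP8AAwf
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf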
00:22:20.247 [2024-06-10 11:29:17.252679] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xUfSP8AAwf 00:22:20.247 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.506 [2024-06-10 11:29:17.539578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.506 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:20.506 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:20.766 [2024-06-10 11:29:17.908494] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:20.766 [2024-06-10 11:29:17.908682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.766 11:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.026 malloc0 00:22:21.026 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:21.286 [2024-06-10 11:29:18.464444] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1587171 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1587171 /var/tmp/bdevperf.sock 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1587171 ']' 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:21.286 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.286 [2024-06-10 11:29:18.505835] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:21.286 [2024-06-10 11:29:18.505880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587171 ] 00:22:21.547 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.547 [2024-06-10 11:29:18.559246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.547 [2024-06-10 11:29:18.612133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.547 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:21.547 11:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:21.547 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:21.808 [2024-06-10 11:29:18.870833] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.808 [2024-06-10 11:29:18.870891] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.808 TLSTESTn1 00:22:21.808 11:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:22.069 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:22.069 "subsystems": [ 00:22:22.069 { 00:22:22.069 "subsystem": "keyring", 00:22:22.069 "config": [] 00:22:22.069 }, 00:22:22.069 { 00:22:22.069 "subsystem": "iobuf", 00:22:22.069 "config": [ 00:22:22.069 { 00:22:22.069 "method": "iobuf_set_options", 00:22:22.069 "params": { 00:22:22.069 "small_pool_count": 8192, 00:22:22.069 "large_pool_count": 1024, 00:22:22.069 "small_bufsize": 8192, 00:22:22.069 "large_bufsize": 135168 00:22:22.069 } 00:22:22.069 } 00:22:22.069 ] 00:22:22.069 }, 00:22:22.069 { 00:22:22.069 "subsystem": "sock", 00:22:22.069 "config": [ 00:22:22.069 { 00:22:22.069 "method": "sock_set_default_impl", 00:22:22.069 "params": { 00:22:22.069 "impl_name": "posix" 00:22:22.069 } 00:22:22.069 }, 00:22:22.069 { 00:22:22.069 "method": "sock_impl_set_options", 00:22:22.069 "params": { 00:22:22.069 "impl_name": "ssl", 00:22:22.069 "recv_buf_size": 4096, 00:22:22.070 "send_buf_size": 4096, 00:22:22.070 "enable_recv_pipe": true, 00:22:22.070 "enable_quickack": false, 00:22:22.070 "enable_placement_id": 0, 00:22:22.070 "enable_zerocopy_send_server": true, 00:22:22.070 "enable_zerocopy_send_client": false, 00:22:22.070 "zerocopy_threshold": 0, 00:22:22.070 "tls_version": 0, 00:22:22.070 "enable_ktls": false 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "sock_impl_set_options", 00:22:22.070 "params": { 00:22:22.070 "impl_name": "posix", 00:22:22.070 "recv_buf_size": 2097152, 00:22:22.070 "send_buf_size": 
2097152, 00:22:22.070 "enable_recv_pipe": true, 00:22:22.070 "enable_quickack": false, 00:22:22.070 "enable_placement_id": 0, 00:22:22.070 "enable_zerocopy_send_server": true, 00:22:22.070 "enable_zerocopy_send_client": false, 00:22:22.070 "zerocopy_threshold": 0, 00:22:22.070 "tls_version": 0, 00:22:22.070 "enable_ktls": false 00:22:22.070 } 00:22:22.070 } 00:22:22.070 ] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "vmd", 00:22:22.070 "config": [] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "accel", 00:22:22.070 "config": [ 00:22:22.070 { 00:22:22.070 "method": "accel_set_options", 00:22:22.070 "params": { 00:22:22.070 "small_cache_size": 128, 00:22:22.070 "large_cache_size": 16, 00:22:22.070 "task_count": 2048, 00:22:22.070 "sequence_count": 2048, 00:22:22.070 "buf_count": 2048 00:22:22.070 } 00:22:22.070 } 00:22:22.070 ] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "bdev", 00:22:22.070 "config": [ 00:22:22.070 { 00:22:22.070 "method": "bdev_set_options", 00:22:22.070 "params": { 00:22:22.070 "bdev_io_pool_size": 65535, 00:22:22.070 "bdev_io_cache_size": 256, 00:22:22.070 "bdev_auto_examine": true, 00:22:22.070 "iobuf_small_cache_size": 128, 00:22:22.070 "iobuf_large_cache_size": 16 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_raid_set_options", 00:22:22.070 "params": { 00:22:22.070 "process_window_size_kb": 1024 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_iscsi_set_options", 00:22:22.070 "params": { 00:22:22.070 "timeout_sec": 30 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_nvme_set_options", 00:22:22.070 "params": { 00:22:22.070 "action_on_timeout": "none", 00:22:22.070 "timeout_us": 0, 00:22:22.070 "timeout_admin_us": 0, 00:22:22.070 "keep_alive_timeout_ms": 10000, 00:22:22.070 "arbitration_burst": 0, 00:22:22.070 "low_priority_weight": 0, 00:22:22.070 "medium_priority_weight": 0, 00:22:22.070 "high_priority_weight": 0, 00:22:22.070 "nvme_adminq_poll_period_us": 10000, 00:22:22.070 "nvme_ioq_poll_period_us": 0, 00:22:22.070 "io_queue_requests": 0, 00:22:22.070 "delay_cmd_submit": true, 00:22:22.070 "transport_retry_count": 4, 00:22:22.070 "bdev_retry_count": 3, 00:22:22.070 "transport_ack_timeout": 0, 00:22:22.070 "ctrlr_loss_timeout_sec": 0, 00:22:22.070 "reconnect_delay_sec": 0, 00:22:22.070 "fast_io_fail_timeout_sec": 0, 00:22:22.070 "disable_auto_failback": false, 00:22:22.070 "generate_uuids": false, 00:22:22.070 "transport_tos": 0, 00:22:22.070 "nvme_error_stat": false, 00:22:22.070 "rdma_srq_size": 0, 00:22:22.070 "io_path_stat": false, 00:22:22.070 "allow_accel_sequence": false, 00:22:22.070 "rdma_max_cq_size": 0, 00:22:22.070 "rdma_cm_event_timeout_ms": 0, 00:22:22.070 "dhchap_digests": [ 00:22:22.070 "sha256", 00:22:22.070 "sha384", 00:22:22.070 "sha512" 00:22:22.070 ], 00:22:22.070 "dhchap_dhgroups": [ 00:22:22.070 "null", 00:22:22.070 "ffdhe2048", 00:22:22.070 "ffdhe3072", 00:22:22.070 "ffdhe4096", 00:22:22.070 "ffdhe6144", 00:22:22.070 "ffdhe8192" 00:22:22.070 ] 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_nvme_set_hotplug", 00:22:22.070 "params": { 00:22:22.070 "period_us": 100000, 00:22:22.070 "enable": false 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_malloc_create", 00:22:22.070 "params": { 00:22:22.070 "name": "malloc0", 00:22:22.070 "num_blocks": 8192, 00:22:22.070 "block_size": 4096, 00:22:22.070 "physical_block_size": 4096, 00:22:22.070 "uuid": 
"c5da9b26-5247-461e-b073-1661c5ec8323", 00:22:22.070 "optimal_io_boundary": 0 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "bdev_wait_for_examine" 00:22:22.070 } 00:22:22.070 ] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "nbd", 00:22:22.070 "config": [] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "scheduler", 00:22:22.070 "config": [ 00:22:22.070 { 00:22:22.070 "method": "framework_set_scheduler", 00:22:22.070 "params": { 00:22:22.070 "name": "static" 00:22:22.070 } 00:22:22.070 } 00:22:22.070 ] 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "subsystem": "nvmf", 00:22:22.070 "config": [ 00:22:22.070 { 00:22:22.070 "method": "nvmf_set_config", 00:22:22.070 "params": { 00:22:22.070 "discovery_filter": "match_any", 00:22:22.070 "admin_cmd_passthru": { 00:22:22.070 "identify_ctrlr": false 00:22:22.070 } 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "nvmf_set_max_subsystems", 00:22:22.070 "params": { 00:22:22.070 "max_subsystems": 1024 00:22:22.070 } 00:22:22.070 }, 00:22:22.070 { 00:22:22.070 "method": "nvmf_set_crdt", 00:22:22.070 "params": { 00:22:22.070 "crdt1": 0, 00:22:22.070 "crdt2": 0, 00:22:22.070 "crdt3": 0 00:22:22.070 } 00:22:22.070 }, 00:22:22.071 { 00:22:22.071 "method": "nvmf_create_transport", 00:22:22.071 "params": { 00:22:22.071 "trtype": "TCP", 00:22:22.071 "max_queue_depth": 128, 00:22:22.071 "max_io_qpairs_per_ctrlr": 127, 00:22:22.071 "in_capsule_data_size": 4096, 00:22:22.071 "max_io_size": 131072, 00:22:22.071 "io_unit_size": 131072, 00:22:22.071 "max_aq_depth": 128, 00:22:22.071 "num_shared_buffers": 511, 00:22:22.071 "buf_cache_size": 4294967295, 00:22:22.071 "dif_insert_or_strip": false, 00:22:22.071 "zcopy": false, 00:22:22.071 "c2h_success": false, 00:22:22.071 "sock_priority": 0, 00:22:22.071 "abort_timeout_sec": 1, 00:22:22.071 "ack_timeout": 0, 00:22:22.071 "data_wr_pool_size": 0 00:22:22.071 } 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "method": "nvmf_create_subsystem", 00:22:22.071 "params": { 00:22:22.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.071 "allow_any_host": false, 00:22:22.071 "serial_number": "SPDK00000000000001", 00:22:22.071 "model_number": "SPDK bdev Controller", 00:22:22.071 "max_namespaces": 10, 00:22:22.071 "min_cntlid": 1, 00:22:22.071 "max_cntlid": 65519, 00:22:22.071 "ana_reporting": false 00:22:22.071 } 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "method": "nvmf_subsystem_add_host", 00:22:22.071 "params": { 00:22:22.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.071 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.071 "psk": "/tmp/tmp.xUfSP8AAwf" 00:22:22.071 } 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "method": "nvmf_subsystem_add_ns", 00:22:22.071 "params": { 00:22:22.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.071 "namespace": { 00:22:22.071 "nsid": 1, 00:22:22.071 "bdev_name": "malloc0", 00:22:22.071 "nguid": "C5DA9B265247461EB0731661C5EC8323", 00:22:22.071 "uuid": "c5da9b26-5247-461e-b073-1661c5ec8323", 00:22:22.071 "no_auto_visible": false 00:22:22.071 } 00:22:22.071 } 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "method": "nvmf_subsystem_add_listener", 00:22:22.071 "params": { 00:22:22.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.071 "listen_address": { 00:22:22.071 "trtype": "TCP", 00:22:22.071 "adrfam": "IPv4", 00:22:22.071 "traddr": "10.0.0.2", 00:22:22.071 "trsvcid": "4420" 00:22:22.071 }, 00:22:22.071 "secure_channel": true 00:22:22.071 } 00:22:22.071 } 00:22:22.071 ] 00:22:22.071 } 00:22:22.071 ] 00:22:22.071 }' 00:22:22.071 11:29:19 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:22.332 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:22.332 "subsystems": [ 00:22:22.332 { 00:22:22.332 "subsystem": "keyring", 00:22:22.332 "config": [] 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "subsystem": "iobuf", 00:22:22.332 "config": [ 00:22:22.332 { 00:22:22.332 "method": "iobuf_set_options", 00:22:22.332 "params": { 00:22:22.332 "small_pool_count": 8192, 00:22:22.332 "large_pool_count": 1024, 00:22:22.332 "small_bufsize": 8192, 00:22:22.332 "large_bufsize": 135168 00:22:22.332 } 00:22:22.332 } 00:22:22.332 ] 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "subsystem": "sock", 00:22:22.332 "config": [ 00:22:22.332 { 00:22:22.332 "method": "sock_set_default_impl", 00:22:22.332 "params": { 00:22:22.332 "impl_name": "posix" 00:22:22.332 } 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "method": "sock_impl_set_options", 00:22:22.332 "params": { 00:22:22.332 "impl_name": "ssl", 00:22:22.332 "recv_buf_size": 4096, 00:22:22.332 "send_buf_size": 4096, 00:22:22.332 "enable_recv_pipe": true, 00:22:22.332 "enable_quickack": false, 00:22:22.332 "enable_placement_id": 0, 00:22:22.332 "enable_zerocopy_send_server": true, 00:22:22.332 "enable_zerocopy_send_client": false, 00:22:22.332 "zerocopy_threshold": 0, 00:22:22.332 "tls_version": 0, 00:22:22.332 "enable_ktls": false 00:22:22.332 } 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "method": "sock_impl_set_options", 00:22:22.332 "params": { 00:22:22.332 "impl_name": "posix", 00:22:22.332 "recv_buf_size": 2097152, 00:22:22.332 "send_buf_size": 2097152, 00:22:22.332 "enable_recv_pipe": true, 00:22:22.332 "enable_quickack": false, 00:22:22.332 "enable_placement_id": 0, 00:22:22.332 "enable_zerocopy_send_server": true, 00:22:22.332 "enable_zerocopy_send_client": false, 00:22:22.332 "zerocopy_threshold": 0, 00:22:22.332 "tls_version": 0, 00:22:22.332 "enable_ktls": false 00:22:22.332 } 00:22:22.332 } 00:22:22.332 ] 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "subsystem": "vmd", 00:22:22.332 "config": [] 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "subsystem": "accel", 00:22:22.332 "config": [ 00:22:22.332 { 00:22:22.332 "method": "accel_set_options", 00:22:22.332 "params": { 00:22:22.332 "small_cache_size": 128, 00:22:22.332 "large_cache_size": 16, 00:22:22.332 "task_count": 2048, 00:22:22.332 "sequence_count": 2048, 00:22:22.332 "buf_count": 2048 00:22:22.332 } 00:22:22.332 } 00:22:22.332 ] 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "subsystem": "bdev", 00:22:22.332 "config": [ 00:22:22.332 { 00:22:22.332 "method": "bdev_set_options", 00:22:22.332 "params": { 00:22:22.332 "bdev_io_pool_size": 65535, 00:22:22.332 "bdev_io_cache_size": 256, 00:22:22.332 "bdev_auto_examine": true, 00:22:22.332 "iobuf_small_cache_size": 128, 00:22:22.332 "iobuf_large_cache_size": 16 00:22:22.332 } 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "method": "bdev_raid_set_options", 00:22:22.332 "params": { 00:22:22.332 "process_window_size_kb": 1024 00:22:22.332 } 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "method": "bdev_iscsi_set_options", 00:22:22.332 "params": { 00:22:22.332 "timeout_sec": 30 00:22:22.332 } 00:22:22.332 }, 00:22:22.332 { 00:22:22.332 "method": "bdev_nvme_set_options", 00:22:22.332 "params": { 00:22:22.332 "action_on_timeout": "none", 00:22:22.332 "timeout_us": 0, 00:22:22.332 "timeout_admin_us": 0, 00:22:22.332 "keep_alive_timeout_ms": 10000, 00:22:22.332 "arbitration_burst": 0, 
00:22:22.332 "low_priority_weight": 0, 00:22:22.332 "medium_priority_weight": 0, 00:22:22.332 "high_priority_weight": 0, 00:22:22.332 "nvme_adminq_poll_period_us": 10000, 00:22:22.332 "nvme_ioq_poll_period_us": 0, 00:22:22.332 "io_queue_requests": 512, 00:22:22.332 "delay_cmd_submit": true, 00:22:22.332 "transport_retry_count": 4, 00:22:22.332 "bdev_retry_count": 3, 00:22:22.332 "transport_ack_timeout": 0, 00:22:22.332 "ctrlr_loss_timeout_sec": 0, 00:22:22.332 "reconnect_delay_sec": 0, 00:22:22.332 "fast_io_fail_timeout_sec": 0, 00:22:22.333 "disable_auto_failback": false, 00:22:22.333 "generate_uuids": false, 00:22:22.333 "transport_tos": 0, 00:22:22.333 "nvme_error_stat": false, 00:22:22.333 "rdma_srq_size": 0, 00:22:22.333 "io_path_stat": false, 00:22:22.333 "allow_accel_sequence": false, 00:22:22.333 "rdma_max_cq_size": 0, 00:22:22.333 "rdma_cm_event_timeout_ms": 0, 00:22:22.333 "dhchap_digests": [ 00:22:22.333 "sha256", 00:22:22.333 "sha384", 00:22:22.333 "sha512" 00:22:22.333 ], 00:22:22.333 "dhchap_dhgroups": [ 00:22:22.333 "null", 00:22:22.333 "ffdhe2048", 00:22:22.333 "ffdhe3072", 00:22:22.333 "ffdhe4096", 00:22:22.333 "ffdhe6144", 00:22:22.333 "ffdhe8192" 00:22:22.333 ] 00:22:22.333 } 00:22:22.333 }, 00:22:22.333 { 00:22:22.333 "method": "bdev_nvme_attach_controller", 00:22:22.333 "params": { 00:22:22.333 "name": "TLSTEST", 00:22:22.333 "trtype": "TCP", 00:22:22.333 "adrfam": "IPv4", 00:22:22.333 "traddr": "10.0.0.2", 00:22:22.333 "trsvcid": "4420", 00:22:22.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.333 "prchk_reftag": false, 00:22:22.333 "prchk_guard": false, 00:22:22.333 "ctrlr_loss_timeout_sec": 0, 00:22:22.333 "reconnect_delay_sec": 0, 00:22:22.333 "fast_io_fail_timeout_sec": 0, 00:22:22.333 "psk": "/tmp/tmp.xUfSP8AAwf", 00:22:22.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.333 "hdgst": false, 00:22:22.333 "ddgst": false 00:22:22.333 } 00:22:22.333 }, 00:22:22.333 { 00:22:22.333 "method": "bdev_nvme_set_hotplug", 00:22:22.333 "params": { 00:22:22.333 "period_us": 100000, 00:22:22.333 "enable": false 00:22:22.333 } 00:22:22.333 }, 00:22:22.333 { 00:22:22.333 "method": "bdev_wait_for_examine" 00:22:22.333 } 00:22:22.333 ] 00:22:22.333 }, 00:22:22.333 { 00:22:22.333 "subsystem": "nbd", 00:22:22.333 "config": [] 00:22:22.333 } 00:22:22.333 ] 00:22:22.333 }' 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1587171 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1587171 ']' 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1587171 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:22.333 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1587171 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1587171' 00:22:22.594 killing process with pid 1587171 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1587171 00:22:22.594 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.594 00:22:22.594 Latency(us) 00:22:22.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:22.594 =================================================================================================================== 00:22:22.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.594 [2024-06-10 11:29:19.574039] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1587171 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1586849 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1586849 ']' 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1586849 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1586849 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1586849' 00:22:22.594 killing process with pid 1586849 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1586849 00:22:22.594 [2024-06-10 11:29:19.738192] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:22.594 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1586849 00:22:22.853 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:22.853 11:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.854 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:22.854 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.854 11:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:22.854 "subsystems": [ 00:22:22.854 { 00:22:22.854 "subsystem": "keyring", 00:22:22.854 "config": [] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "iobuf", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "iobuf_set_options", 00:22:22.854 "params": { 00:22:22.854 "small_pool_count": 8192, 00:22:22.854 "large_pool_count": 1024, 00:22:22.854 "small_bufsize": 8192, 00:22:22.854 "large_bufsize": 135168 00:22:22.854 } 00:22:22.854 } 00:22:22.854 ] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "sock", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "sock_set_default_impl", 00:22:22.854 "params": { 00:22:22.854 "impl_name": "posix" 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "sock_impl_set_options", 00:22:22.854 "params": { 00:22:22.854 "impl_name": "ssl", 00:22:22.854 "recv_buf_size": 4096, 00:22:22.854 "send_buf_size": 4096, 00:22:22.854 "enable_recv_pipe": true, 00:22:22.854 "enable_quickack": false, 00:22:22.854 "enable_placement_id": 0, 00:22:22.854 "enable_zerocopy_send_server": true, 00:22:22.854 "enable_zerocopy_send_client": false, 00:22:22.854 "zerocopy_threshold": 0, 00:22:22.854 "tls_version": 0, 00:22:22.854 "enable_ktls": false 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "sock_impl_set_options", 
00:22:22.854 "params": { 00:22:22.854 "impl_name": "posix", 00:22:22.854 "recv_buf_size": 2097152, 00:22:22.854 "send_buf_size": 2097152, 00:22:22.854 "enable_recv_pipe": true, 00:22:22.854 "enable_quickack": false, 00:22:22.854 "enable_placement_id": 0, 00:22:22.854 "enable_zerocopy_send_server": true, 00:22:22.854 "enable_zerocopy_send_client": false, 00:22:22.854 "zerocopy_threshold": 0, 00:22:22.854 "tls_version": 0, 00:22:22.854 "enable_ktls": false 00:22:22.854 } 00:22:22.854 } 00:22:22.854 ] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "vmd", 00:22:22.854 "config": [] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "accel", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "accel_set_options", 00:22:22.854 "params": { 00:22:22.854 "small_cache_size": 128, 00:22:22.854 "large_cache_size": 16, 00:22:22.854 "task_count": 2048, 00:22:22.854 "sequence_count": 2048, 00:22:22.854 "buf_count": 2048 00:22:22.854 } 00:22:22.854 } 00:22:22.854 ] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "bdev", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "bdev_set_options", 00:22:22.854 "params": { 00:22:22.854 "bdev_io_pool_size": 65535, 00:22:22.854 "bdev_io_cache_size": 256, 00:22:22.854 "bdev_auto_examine": true, 00:22:22.854 "iobuf_small_cache_size": 128, 00:22:22.854 "iobuf_large_cache_size": 16 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_raid_set_options", 00:22:22.854 "params": { 00:22:22.854 "process_window_size_kb": 1024 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_iscsi_set_options", 00:22:22.854 "params": { 00:22:22.854 "timeout_sec": 30 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_nvme_set_options", 00:22:22.854 "params": { 00:22:22.854 "action_on_timeout": "none", 00:22:22.854 "timeout_us": 0, 00:22:22.854 "timeout_admin_us": 0, 00:22:22.854 "keep_alive_timeout_ms": 10000, 00:22:22.854 "arbitration_burst": 0, 00:22:22.854 "low_priority_weight": 0, 00:22:22.854 "medium_priority_weight": 0, 00:22:22.854 "high_priority_weight": 0, 00:22:22.854 "nvme_adminq_poll_period_us": 10000, 00:22:22.854 "nvme_ioq_poll_period_us": 0, 00:22:22.854 "io_queue_requests": 0, 00:22:22.854 "delay_cmd_submit": true, 00:22:22.854 "transport_retry_count": 4, 00:22:22.854 "bdev_retry_count": 3, 00:22:22.854 "transport_ack_timeout": 0, 00:22:22.854 "ctrlr_loss_timeout_sec": 0, 00:22:22.854 "reconnect_delay_sec": 0, 00:22:22.854 "fast_io_fail_timeout_sec": 0, 00:22:22.854 "disable_auto_failback": false, 00:22:22.854 "generate_uuids": false, 00:22:22.854 "transport_tos": 0, 00:22:22.854 "nvme_error_stat": false, 00:22:22.854 "rdma_srq_size": 0, 00:22:22.854 "io_path_stat": false, 00:22:22.854 "allow_accel_sequence": false, 00:22:22.854 "rdma_max_cq_size": 0, 00:22:22.854 "rdma_cm_event_timeout_ms": 0, 00:22:22.854 "dhchap_digests": [ 00:22:22.854 "sha256", 00:22:22.854 "sha384", 00:22:22.854 "sha512" 00:22:22.854 ], 00:22:22.854 "dhchap_dhgroups": [ 00:22:22.854 "null", 00:22:22.854 "ffdhe2048", 00:22:22.854 "ffdhe3072", 00:22:22.854 "ffdhe4096", 00:22:22.854 "ffdhe6144", 00:22:22.854 "ffdhe8192" 00:22:22.854 ] 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_nvme_set_hotplug", 00:22:22.854 "params": { 00:22:22.854 "period_us": 100000, 00:22:22.854 "enable": false 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_malloc_create", 00:22:22.854 "params": { 00:22:22.854 "name": "malloc0", 00:22:22.854 "num_blocks": 8192, 
00:22:22.854 "block_size": 4096, 00:22:22.854 "physical_block_size": 4096, 00:22:22.854 "uuid": "c5da9b26-5247-461e-b073-1661c5ec8323", 00:22:22.854 "optimal_io_boundary": 0 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "bdev_wait_for_examine" 00:22:22.854 } 00:22:22.854 ] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "nbd", 00:22:22.854 "config": [] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "scheduler", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "framework_set_scheduler", 00:22:22.854 "params": { 00:22:22.854 "name": "static" 00:22:22.854 } 00:22:22.854 } 00:22:22.854 ] 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "subsystem": "nvmf", 00:22:22.854 "config": [ 00:22:22.854 { 00:22:22.854 "method": "nvmf_set_config", 00:22:22.854 "params": { 00:22:22.854 "discovery_filter": "match_any", 00:22:22.854 "admin_cmd_passthru": { 00:22:22.854 "identify_ctrlr": false 00:22:22.854 } 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "nvmf_set_max_subsystems", 00:22:22.854 "params": { 00:22:22.854 "max_subsystems": 1024 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "nvmf_set_crdt", 00:22:22.854 "params": { 00:22:22.854 "crdt1": 0, 00:22:22.854 "crdt2": 0, 00:22:22.854 "crdt3": 0 00:22:22.854 } 00:22:22.854 }, 00:22:22.854 { 00:22:22.854 "method": "nvmf_create_transport", 00:22:22.854 "params": { 00:22:22.854 "trtype": "TCP", 00:22:22.854 "max_queue_depth": 128, 00:22:22.854 "max_io_qpairs_per_ctrlr": 127, 00:22:22.855 "in_capsule_data_size": 4096, 00:22:22.855 "max_io_size": 131072, 00:22:22.855 "io_unit_size": 131072, 00:22:22.855 "max_aq_depth": 128, 00:22:22.855 "num_shared_buffers": 511, 00:22:22.855 "buf_cache_size": 4294967295, 00:22:22.855 "dif_insert_or_strip": false, 00:22:22.855 "zcopy": false, 00:22:22.855 "c2h_success": false, 00:22:22.855 "sock_priority": 0, 00:22:22.855 "abort_timeout_sec": 1, 00:22:22.855 "ack_timeout": 0, 00:22:22.855 "data_wr_pool_size": 0 00:22:22.855 } 00:22:22.855 }, 00:22:22.855 { 00:22:22.855 "method": "nvmf_create_subsystem", 00:22:22.855 "params": { 00:22:22.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.855 "allow_any_host": false, 00:22:22.855 "serial_number": "SPDK00000000000001", 00:22:22.855 "model_number": "SPDK bdev Controller", 00:22:22.855 "max_namespaces": 10, 00:22:22.855 "min_cntlid": 1, 00:22:22.855 "max_cntlid": 65519, 00:22:22.855 "ana_reporting": false 00:22:22.855 } 00:22:22.855 }, 00:22:22.855 { 00:22:22.855 "method": "nvmf_subsystem_add_host", 00:22:22.855 "params": { 00:22:22.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.855 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.855 "psk": "/tmp/tmp.xUfSP8AAwf" 00:22:22.855 } 00:22:22.855 }, 00:22:22.855 { 00:22:22.855 "method": "nvmf_subsystem_add_ns", 00:22:22.855 "params": { 00:22:22.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.855 "namespace": { 00:22:22.855 "nsid": 1, 00:22:22.855 "bdev_name": "malloc0", 00:22:22.855 "nguid": "C5DA9B265247461EB0731661C5EC8323", 00:22:22.855 "uuid": "c5da9b26-5247-461e-b073-1661c5ec8323", 00:22:22.855 "no_auto_visible": false 00:22:22.855 } 00:22:22.855 } 00:22:22.855 }, 00:22:22.855 { 00:22:22.855 "method": "nvmf_subsystem_add_listener", 00:22:22.855 "params": { 00:22:22.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.855 "listen_address": { 00:22:22.855 "trtype": "TCP", 00:22:22.855 "adrfam": "IPv4", 00:22:22.855 "traddr": "10.0.0.2", 00:22:22.855 "trsvcid": "4420" 00:22:22.855 }, 00:22:22.855 "secure_channel": true 00:22:22.855 } 
00:22:22.855 } 00:22:22.855 ] 00:22:22.855 } 00:22:22.855 ] 00:22:22.855 }' 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1587488 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1587488 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1587488 ']' 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:22.855 11:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.855 [2024-06-10 11:29:19.930596] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:22.855 [2024-06-10 11:29:19.930650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.855 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.855 [2024-06-10 11:29:20.000623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.855 [2024-06-10 11:29:20.065007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.855 [2024-06-10 11:29:20.065047] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.855 [2024-06-10 11:29:20.065054] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.855 [2024-06-10 11:29:20.065060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.855 [2024-06-10 11:29:20.065065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
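At this point (target/tls.sh@203) the target is started with -c /dev/fd/62, i.e. the JSON captured earlier by rpc.py save_config (target/tls.sh@196) is fed back in as the startup configuration. The transport, the TLS listener, the malloc0 namespace and the host entry with its PSK path are therefore created during initialization with no further rpc.py calls; the notices that follow (TCP transport init, the PSK-path deprecation warning, the listener on 10.0.0.2 port 4420) come from replaying that saved config. A minimal equivalent flow, assuming the config is staged in an ordinary file (/tmp/tgt_config.json is a placeholder name, not taken from this run, and in this job nvmf_tgt is additionally wrapped in ip netns exec cvl_0_0_ns_spdk):
  rpc.py save_config > /tmp/tgt_config.json
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt_config.json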
00:22:22.855 [2024-06-10 11:29:20.065117] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.115 [2024-06-10 11:29:20.251314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.115 [2024-06-10 11:29:20.267260] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.115 [2024-06-10 11:29:20.283312] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.115 [2024-06-10 11:29:20.300138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1587522 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1587522 /var/tmp/bdevperf.sock 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1587522 ']' 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
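The bdevperf side follows the same pattern: the bdevperfconf JSON saved at target/tls.sh@197 is echoed into /dev/fd/63 and passed to bdevperf with -c, so the TLSTEST controller is created at startup from the saved configuration rather than through a separate rpc.py bdev_nvme_attach_controller call as in the earlier runs (target/tls.sh@34, target/tls.sh@192). The stanza that carries the TLS credentials in that config, excerpted from the save_config dump above with unrelated parameters elided, is:
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "TLSTEST", "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
                "psk": "/tmp/tmp.xUfSP8AAwf", ... } }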
00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.685 11:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:23.685 "subsystems": [ 00:22:23.685 { 00:22:23.685 "subsystem": "keyring", 00:22:23.685 "config": [] 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "subsystem": "iobuf", 00:22:23.685 "config": [ 00:22:23.685 { 00:22:23.685 "method": "iobuf_set_options", 00:22:23.685 "params": { 00:22:23.685 "small_pool_count": 8192, 00:22:23.685 "large_pool_count": 1024, 00:22:23.685 "small_bufsize": 8192, 00:22:23.685 "large_bufsize": 135168 00:22:23.685 } 00:22:23.685 } 00:22:23.685 ] 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "subsystem": "sock", 00:22:23.685 "config": [ 00:22:23.685 { 00:22:23.685 "method": "sock_set_default_impl", 00:22:23.685 "params": { 00:22:23.685 "impl_name": "posix" 00:22:23.685 } 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "method": "sock_impl_set_options", 00:22:23.685 "params": { 00:22:23.685 "impl_name": "ssl", 00:22:23.685 "recv_buf_size": 4096, 00:22:23.685 "send_buf_size": 4096, 00:22:23.685 "enable_recv_pipe": true, 00:22:23.685 "enable_quickack": false, 00:22:23.685 "enable_placement_id": 0, 00:22:23.685 "enable_zerocopy_send_server": true, 00:22:23.685 "enable_zerocopy_send_client": false, 00:22:23.685 "zerocopy_threshold": 0, 00:22:23.685 "tls_version": 0, 00:22:23.685 "enable_ktls": false 00:22:23.685 } 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "method": "sock_impl_set_options", 00:22:23.685 "params": { 00:22:23.685 "impl_name": "posix", 00:22:23.685 "recv_buf_size": 2097152, 00:22:23.685 "send_buf_size": 2097152, 00:22:23.685 "enable_recv_pipe": true, 00:22:23.685 "enable_quickack": false, 00:22:23.685 "enable_placement_id": 0, 00:22:23.685 "enable_zerocopy_send_server": true, 00:22:23.685 "enable_zerocopy_send_client": false, 00:22:23.685 "zerocopy_threshold": 0, 00:22:23.685 "tls_version": 0, 00:22:23.685 "enable_ktls": false 00:22:23.685 } 00:22:23.685 } 00:22:23.685 ] 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "subsystem": "vmd", 00:22:23.685 "config": [] 00:22:23.685 }, 00:22:23.685 { 00:22:23.685 "subsystem": "accel", 00:22:23.685 "config": [ 00:22:23.685 { 00:22:23.685 "method": "accel_set_options", 00:22:23.685 "params": { 00:22:23.685 "small_cache_size": 128, 00:22:23.685 "large_cache_size": 16, 00:22:23.686 "task_count": 2048, 00:22:23.686 "sequence_count": 2048, 00:22:23.686 "buf_count": 2048 00:22:23.686 } 00:22:23.686 } 00:22:23.686 ] 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "subsystem": "bdev", 00:22:23.686 "config": [ 00:22:23.686 { 00:22:23.686 "method": "bdev_set_options", 00:22:23.686 "params": { 00:22:23.686 "bdev_io_pool_size": 65535, 00:22:23.686 "bdev_io_cache_size": 256, 00:22:23.686 "bdev_auto_examine": true, 00:22:23.686 "iobuf_small_cache_size": 128, 00:22:23.686 "iobuf_large_cache_size": 16 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": "bdev_raid_set_options", 00:22:23.686 "params": { 00:22:23.686 "process_window_size_kb": 1024 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": "bdev_iscsi_set_options", 00:22:23.686 "params": { 00:22:23.686 "timeout_sec": 30 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": 
"bdev_nvme_set_options", 00:22:23.686 "params": { 00:22:23.686 "action_on_timeout": "none", 00:22:23.686 "timeout_us": 0, 00:22:23.686 "timeout_admin_us": 0, 00:22:23.686 "keep_alive_timeout_ms": 10000, 00:22:23.686 "arbitration_burst": 0, 00:22:23.686 "low_priority_weight": 0, 00:22:23.686 "medium_priority_weight": 0, 00:22:23.686 "high_priority_weight": 0, 00:22:23.686 "nvme_adminq_poll_period_us": 10000, 00:22:23.686 "nvme_ioq_poll_period_us": 0, 00:22:23.686 "io_queue_requests": 512, 00:22:23.686 "delay_cmd_submit": true, 00:22:23.686 "transport_retry_count": 4, 00:22:23.686 "bdev_retry_count": 3, 00:22:23.686 "transport_ack_timeout": 0, 00:22:23.686 "ctrlr_loss_timeout_sec": 0, 00:22:23.686 "reconnect_delay_sec": 0, 00:22:23.686 "fast_io_fail_timeout_sec": 0, 00:22:23.686 "disable_auto_failback": false, 00:22:23.686 "generate_uuids": false, 00:22:23.686 "transport_tos": 0, 00:22:23.686 "nvme_error_stat": false, 00:22:23.686 "rdma_srq_size": 0, 00:22:23.686 "io_path_stat": false, 00:22:23.686 "allow_accel_sequence": false, 00:22:23.686 "rdma_max_cq_size": 0, 00:22:23.686 "rdma_cm_event_timeout_ms": 0, 00:22:23.686 "dhchap_digests": [ 00:22:23.686 "sha256", 00:22:23.686 "sha384", 00:22:23.686 "sha512" 00:22:23.686 ], 00:22:23.686 "dhchap_dhgroups": [ 00:22:23.686 "null", 00:22:23.686 "ffdhe2048", 00:22:23.686 "ffdhe3072", 00:22:23.686 "ffdhe4096", 00:22:23.686 "ffdhe6144", 00:22:23.686 "ffdhe8192" 00:22:23.686 ] 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": "bdev_nvme_attach_controller", 00:22:23.686 "params": { 00:22:23.686 "name": "TLSTEST", 00:22:23.686 "trtype": "TCP", 00:22:23.686 "adrfam": "IPv4", 00:22:23.686 "traddr": "10.0.0.2", 00:22:23.686 "trsvcid": "4420", 00:22:23.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.686 "prchk_reftag": false, 00:22:23.686 "prchk_guard": false, 00:22:23.686 "ctrlr_loss_timeout_sec": 0, 00:22:23.686 "reconnect_delay_sec": 0, 00:22:23.686 "fast_io_fail_timeout_sec": 0, 00:22:23.686 "psk": "/tmp/tmp.xUfSP8AAwf", 00:22:23.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.686 "hdgst": false, 00:22:23.686 "ddgst": false 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": "bdev_nvme_set_hotplug", 00:22:23.686 "params": { 00:22:23.686 "period_us": 100000, 00:22:23.686 "enable": false 00:22:23.686 } 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "method": "bdev_wait_for_examine" 00:22:23.686 } 00:22:23.686 ] 00:22:23.686 }, 00:22:23.686 { 00:22:23.686 "subsystem": "nbd", 00:22:23.686 "config": [] 00:22:23.686 } 00:22:23.686 ] 00:22:23.686 }' 00:22:23.686 [2024-06-10 11:29:20.853954] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:23.686 [2024-06-10 11:29:20.854000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587522 ] 00:22:23.686 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.686 [2024-06-10 11:29:20.908106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.947 [2024-06-10 11:29:20.961000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.947 [2024-06-10 11:29:21.085440] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.947 [2024-06-10 11:29:21.085508] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:24.517 11:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:24.517 11:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:24.517 11:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:24.777 Running I/O for 10 seconds... 00:22:34.890 00:22:34.890 Latency(us) 00:22:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:34.890 Verification LBA range: start 0x0 length 0x2000 00:22:34.890 TLSTESTn1 : 10.08 3687.68 14.41 0.00 0.00 34609.62 5847.83 82676.18 00:22:34.890 =================================================================================================================== 00:22:34.890 Total : 3687.68 14.41 0.00 0.00 34609.62 5847.83 82676.18 00:22:34.890 0 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1587522 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1587522 ']' 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1587522 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1587522 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1587522' 00:22:34.890 killing process with pid 1587522 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1587522 00:22:34.890 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.890 00:22:34.890 Latency(us) 00:22:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.890 =================================================================================================================== 00:22:34.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.890 [2024-06-10 11:29:31.962460] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:34.890 11:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1587522 00:22:34.890 11:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1587488 00:22:34.890 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1587488 ']' 00:22:34.890 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1587488 00:22:35.149 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1587488 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1587488' 00:22:35.150 killing process with pid 1587488 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1587488 00:22:35.150 [2024-06-10 11:29:32.128432] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1587488 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1589375 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1589375 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1589375 ']' 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:35.150 11:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.150 [2024-06-10 11:29:32.323490] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:35.150 [2024-06-10 11:29:32.323564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.150 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.410 [2024-06-10 11:29:32.418718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.410 [2024-06-10 11:29:32.508370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
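The repeated killprocess blocks in this trace (here for the bdevperf pid 1587522 and, just after, the target pid 1587488) all follow the same shape: confirm the pid is still alive with kill -0, read its comm name with ps so that an SPDK reactor is killed rather than a sudo wrapper, then kill it and wait on it. A rough reconstruction from the xtrace lines alone; the real helper in autotest_common.sh carries more error handling than shown:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # bail out if the process is already gone
    local pname
    pname=$(ps --no-headers -o comm= "$pid")       # reactor_0 / reactor_1 / reactor_2 in this run (Linux-only path)
    # the real helper special-cases pname == sudo; that branch is never taken here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # also collects the app's shutdown/latency summary seen above
}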
00:22:35.410 [2024-06-10 11:29:32.508429] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.410 [2024-06-10 11:29:32.508437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.410 [2024-06-10 11:29:32.508443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.410 [2024-06-10 11:29:32.508450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.410 [2024-06-10 11:29:32.508477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.981 11:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:35.981 11:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:35.981 11:29:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.981 11:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:35.981 11:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.242 11:29:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.242 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.xUfSP8AAwf 00:22:36.242 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xUfSP8AAwf 00:22:36.242 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.242 [2024-06-10 11:29:33.407098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.242 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:36.502 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.763 [2024-06-10 11:29:33.812125] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.763 [2024-06-10 11:29:33.812405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.763 11:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:37.023 malloc0 00:22:37.023 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.023 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf 00:22:37.284 [2024-06-10 11:29:34.407603] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1589743 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1589743 /var/tmp/bdevperf.sock 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1589743 ']' 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:37.284 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.284 [2024-06-10 11:29:34.468063] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:37.284 [2024-06-10 11:29:34.468134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589743 ] 00:22:37.284 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.545 [2024-06-10 11:29:34.542621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.545 [2024-06-10 11:29:34.613620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.545 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:37.545 11:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:37.545 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xUfSP8AAwf 00:22:37.805 11:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:38.065 [2024-06-10 11:29:35.054376] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.065 nvme0n1 00:22:38.065 11:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:38.065 Running I/O for 1 seconds... 
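Condensed from the rpc.py invocations traced above (setup_nvmf_tgt at target/tls.sh@49-58 plus the initiator-side calls at @227-228), the TLS plumbing for this pass is the following sequence; everything here is exactly as invoked in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side: listener created with -k (TLS), host added with a PSK file path
# (the log flags this path-based PSK as a deprecated feature slated for removal in v24.09)
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xUfSP8AAwf
# initiator side: the same PSK file loaded as a named keyring key, then referenced by name
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xUfSP8AAwf
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1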
00:22:39.448 00:22:39.448 Latency(us) 00:22:39.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:39.448 Verification LBA range: start 0x0 length 0x2000 00:22:39.448 nvme0n1 : 1.02 4127.47 16.12 0.00 0.00 30719.51 5873.03 87515.77 00:22:39.448 =================================================================================================================== 00:22:39.448 Total : 4127.47 16.12 0.00 0.00 30719.51 5873.03 87515.77 00:22:39.448 0 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1589743 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1589743 ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1589743 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1589743 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1589743' 00:22:39.448 killing process with pid 1589743 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1589743 00:22:39.448 Received shutdown signal, test time was about 1.000000 seconds 00:22:39.448 00:22:39.448 Latency(us) 00:22:39.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.448 =================================================================================================================== 00:22:39.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1589743 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1589375 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1589375 ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1589375 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1589375 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1589375' 00:22:39.448 killing process with pid 1589375 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1589375 00:22:39.448 [2024-06-10 11:29:36.500744] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1589375 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.448 
11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1590248 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1590248 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1590248 ']' 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:39.448 11:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.709 [2024-06-10 11:29:36.704186] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:39.709 [2024-06-10 11:29:36.704245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.709 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.709 [2024-06-10 11:29:36.795971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.709 [2024-06-10 11:29:36.886584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.709 [2024-06-10 11:29:36.886649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.709 [2024-06-10 11:29:36.886657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.709 [2024-06-10 11:29:36.886663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.709 [2024-06-10 11:29:36.886671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
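Each target restart in this file has the same shape: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace (so the 10.0.0.2 listener is reachable through the interfaces moved into that namespace), and the harness blocks until the app's UNIX RPC socket answers before issuing any rpc.py calls. A rough stand-in for that wait; the real waitforlisten helper is not shown in this trace, so the polling loop below is an assumption:

# start the target in the test namespace, then poll its RPC socket
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done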
00:22:39.709 [2024-06-10 11:29:36.886698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.651 [2024-06-10 11:29:37.616459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.651 malloc0 00:22:40.651 [2024-06-10 11:29:37.646439] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.651 [2024-06-10 11:29:37.646745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1590330 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1590330 /var/tmp/bdevperf.sock 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1590330 ']' 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.651 11:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:40.651 [2024-06-10 11:29:37.725033] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:22:40.651 [2024-06-10 11:29:37.725092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590330 ] 00:22:40.651 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.651 [2024-06-10 11:29:37.791878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.651 [2024-06-10 11:29:37.862323] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.911 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:40.911 11:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:40.911 11:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xUfSP8AAwf 00:22:41.171 11:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:41.171 [2024-06-10 11:29:38.319153] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.171 nvme0n1 00:22:41.431 11:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.431 Running I/O for 1 seconds... 00:22:42.370 00:22:42.370 Latency(us) 00:22:42.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.370 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:42.370 Verification LBA range: start 0x0 length 0x2000 00:22:42.370 nvme0n1 : 1.02 3452.99 13.49 0.00 0.00 36699.80 5772.21 58881.58 00:22:42.370 =================================================================================================================== 00:22:42.370 Total : 3452.99 13.49 0.00 0.00 36699.80 5772.21 58881.58 00:22:42.370 0 00:22:42.370 11:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:42.370 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.370 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.631 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.631 11:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:42.631 "subsystems": [ 00:22:42.631 { 00:22:42.631 "subsystem": "keyring", 00:22:42.631 "config": [ 00:22:42.631 { 00:22:42.631 "method": "keyring_file_add_key", 00:22:42.631 "params": { 00:22:42.631 "name": "key0", 00:22:42.631 "path": "/tmp/tmp.xUfSP8AAwf" 00:22:42.631 } 00:22:42.631 } 00:22:42.631 ] 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "subsystem": "iobuf", 00:22:42.631 "config": [ 00:22:42.631 { 00:22:42.631 "method": "iobuf_set_options", 00:22:42.631 "params": { 00:22:42.631 "small_pool_count": 8192, 00:22:42.631 "large_pool_count": 1024, 00:22:42.631 "small_bufsize": 8192, 00:22:42.631 "large_bufsize": 135168 00:22:42.631 } 00:22:42.631 } 00:22:42.631 ] 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "subsystem": "sock", 00:22:42.631 "config": [ 00:22:42.631 { 00:22:42.631 "method": "sock_set_default_impl", 00:22:42.631 "params": { 00:22:42.631 "impl_name": "posix" 00:22:42.631 } 00:22:42.631 }, 00:22:42.631 
{ 00:22:42.631 "method": "sock_impl_set_options", 00:22:42.631 "params": { 00:22:42.631 "impl_name": "ssl", 00:22:42.631 "recv_buf_size": 4096, 00:22:42.631 "send_buf_size": 4096, 00:22:42.631 "enable_recv_pipe": true, 00:22:42.631 "enable_quickack": false, 00:22:42.631 "enable_placement_id": 0, 00:22:42.631 "enable_zerocopy_send_server": true, 00:22:42.631 "enable_zerocopy_send_client": false, 00:22:42.631 "zerocopy_threshold": 0, 00:22:42.631 "tls_version": 0, 00:22:42.631 "enable_ktls": false 00:22:42.631 } 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "method": "sock_impl_set_options", 00:22:42.631 "params": { 00:22:42.631 "impl_name": "posix", 00:22:42.631 "recv_buf_size": 2097152, 00:22:42.631 "send_buf_size": 2097152, 00:22:42.631 "enable_recv_pipe": true, 00:22:42.631 "enable_quickack": false, 00:22:42.631 "enable_placement_id": 0, 00:22:42.631 "enable_zerocopy_send_server": true, 00:22:42.631 "enable_zerocopy_send_client": false, 00:22:42.631 "zerocopy_threshold": 0, 00:22:42.631 "tls_version": 0, 00:22:42.631 "enable_ktls": false 00:22:42.631 } 00:22:42.631 } 00:22:42.631 ] 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "subsystem": "vmd", 00:22:42.631 "config": [] 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "subsystem": "accel", 00:22:42.631 "config": [ 00:22:42.631 { 00:22:42.631 "method": "accel_set_options", 00:22:42.631 "params": { 00:22:42.631 "small_cache_size": 128, 00:22:42.631 "large_cache_size": 16, 00:22:42.631 "task_count": 2048, 00:22:42.631 "sequence_count": 2048, 00:22:42.631 "buf_count": 2048 00:22:42.631 } 00:22:42.631 } 00:22:42.631 ] 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "subsystem": "bdev", 00:22:42.631 "config": [ 00:22:42.631 { 00:22:42.631 "method": "bdev_set_options", 00:22:42.631 "params": { 00:22:42.631 "bdev_io_pool_size": 65535, 00:22:42.631 "bdev_io_cache_size": 256, 00:22:42.631 "bdev_auto_examine": true, 00:22:42.631 "iobuf_small_cache_size": 128, 00:22:42.631 "iobuf_large_cache_size": 16 00:22:42.631 } 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "method": "bdev_raid_set_options", 00:22:42.631 "params": { 00:22:42.631 "process_window_size_kb": 1024 00:22:42.631 } 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "method": "bdev_iscsi_set_options", 00:22:42.631 "params": { 00:22:42.631 "timeout_sec": 30 00:22:42.631 } 00:22:42.631 }, 00:22:42.631 { 00:22:42.631 "method": "bdev_nvme_set_options", 00:22:42.631 "params": { 00:22:42.631 "action_on_timeout": "none", 00:22:42.631 "timeout_us": 0, 00:22:42.631 "timeout_admin_us": 0, 00:22:42.631 "keep_alive_timeout_ms": 10000, 00:22:42.631 "arbitration_burst": 0, 00:22:42.631 "low_priority_weight": 0, 00:22:42.631 "medium_priority_weight": 0, 00:22:42.631 "high_priority_weight": 0, 00:22:42.631 "nvme_adminq_poll_period_us": 10000, 00:22:42.631 "nvme_ioq_poll_period_us": 0, 00:22:42.631 "io_queue_requests": 0, 00:22:42.631 "delay_cmd_submit": true, 00:22:42.631 "transport_retry_count": 4, 00:22:42.631 "bdev_retry_count": 3, 00:22:42.631 "transport_ack_timeout": 0, 00:22:42.631 "ctrlr_loss_timeout_sec": 0, 00:22:42.631 "reconnect_delay_sec": 0, 00:22:42.631 "fast_io_fail_timeout_sec": 0, 00:22:42.631 "disable_auto_failback": false, 00:22:42.631 "generate_uuids": false, 00:22:42.631 "transport_tos": 0, 00:22:42.631 "nvme_error_stat": false, 00:22:42.631 "rdma_srq_size": 0, 00:22:42.631 "io_path_stat": false, 00:22:42.631 "allow_accel_sequence": false, 00:22:42.631 "rdma_max_cq_size": 0, 00:22:42.631 "rdma_cm_event_timeout_ms": 0, 00:22:42.631 "dhchap_digests": [ 00:22:42.631 "sha256", 00:22:42.631 "sha384", 
00:22:42.631 "sha512" 00:22:42.631 ], 00:22:42.631 "dhchap_dhgroups": [ 00:22:42.631 "null", 00:22:42.631 "ffdhe2048", 00:22:42.631 "ffdhe3072", 00:22:42.632 "ffdhe4096", 00:22:42.632 "ffdhe6144", 00:22:42.632 "ffdhe8192" 00:22:42.632 ] 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "bdev_nvme_set_hotplug", 00:22:42.632 "params": { 00:22:42.632 "period_us": 100000, 00:22:42.632 "enable": false 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "bdev_malloc_create", 00:22:42.632 "params": { 00:22:42.632 "name": "malloc0", 00:22:42.632 "num_blocks": 8192, 00:22:42.632 "block_size": 4096, 00:22:42.632 "physical_block_size": 4096, 00:22:42.632 "uuid": "ba945694-342a-47b6-9858-2d69dca414a2", 00:22:42.632 "optimal_io_boundary": 0 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "bdev_wait_for_examine" 00:22:42.632 } 00:22:42.632 ] 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "subsystem": "nbd", 00:22:42.632 "config": [] 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "subsystem": "scheduler", 00:22:42.632 "config": [ 00:22:42.632 { 00:22:42.632 "method": "framework_set_scheduler", 00:22:42.632 "params": { 00:22:42.632 "name": "static" 00:22:42.632 } 00:22:42.632 } 00:22:42.632 ] 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "subsystem": "nvmf", 00:22:42.632 "config": [ 00:22:42.632 { 00:22:42.632 "method": "nvmf_set_config", 00:22:42.632 "params": { 00:22:42.632 "discovery_filter": "match_any", 00:22:42.632 "admin_cmd_passthru": { 00:22:42.632 "identify_ctrlr": false 00:22:42.632 } 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_set_max_subsystems", 00:22:42.632 "params": { 00:22:42.632 "max_subsystems": 1024 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_set_crdt", 00:22:42.632 "params": { 00:22:42.632 "crdt1": 0, 00:22:42.632 "crdt2": 0, 00:22:42.632 "crdt3": 0 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_create_transport", 00:22:42.632 "params": { 00:22:42.632 "trtype": "TCP", 00:22:42.632 "max_queue_depth": 128, 00:22:42.632 "max_io_qpairs_per_ctrlr": 127, 00:22:42.632 "in_capsule_data_size": 4096, 00:22:42.632 "max_io_size": 131072, 00:22:42.632 "io_unit_size": 131072, 00:22:42.632 "max_aq_depth": 128, 00:22:42.632 "num_shared_buffers": 511, 00:22:42.632 "buf_cache_size": 4294967295, 00:22:42.632 "dif_insert_or_strip": false, 00:22:42.632 "zcopy": false, 00:22:42.632 "c2h_success": false, 00:22:42.632 "sock_priority": 0, 00:22:42.632 "abort_timeout_sec": 1, 00:22:42.632 "ack_timeout": 0, 00:22:42.632 "data_wr_pool_size": 0 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_create_subsystem", 00:22:42.632 "params": { 00:22:42.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.632 "allow_any_host": false, 00:22:42.632 "serial_number": "00000000000000000000", 00:22:42.632 "model_number": "SPDK bdev Controller", 00:22:42.632 "max_namespaces": 32, 00:22:42.632 "min_cntlid": 1, 00:22:42.632 "max_cntlid": 65519, 00:22:42.632 "ana_reporting": false 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_subsystem_add_host", 00:22:42.632 "params": { 00:22:42.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.632 "host": "nqn.2016-06.io.spdk:host1", 00:22:42.632 "psk": "key0" 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_subsystem_add_ns", 00:22:42.632 "params": { 00:22:42.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.632 "namespace": { 00:22:42.632 "nsid": 1, 00:22:42.632 "bdev_name": 
"malloc0", 00:22:42.632 "nguid": "BA945694342A47B698582D69DCA414A2", 00:22:42.632 "uuid": "ba945694-342a-47b6-9858-2d69dca414a2", 00:22:42.632 "no_auto_visible": false 00:22:42.632 } 00:22:42.632 } 00:22:42.632 }, 00:22:42.632 { 00:22:42.632 "method": "nvmf_subsystem_add_listener", 00:22:42.632 "params": { 00:22:42.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.632 "listen_address": { 00:22:42.632 "trtype": "TCP", 00:22:42.632 "adrfam": "IPv4", 00:22:42.632 "traddr": "10.0.0.2", 00:22:42.632 "trsvcid": "4420" 00:22:42.632 }, 00:22:42.632 "secure_channel": true 00:22:42.632 } 00:22:42.632 } 00:22:42.632 ] 00:22:42.632 } 00:22:42.632 ] 00:22:42.632 }' 00:22:42.632 11:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:42.892 11:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:42.892 "subsystems": [ 00:22:42.892 { 00:22:42.892 "subsystem": "keyring", 00:22:42.892 "config": [ 00:22:42.892 { 00:22:42.892 "method": "keyring_file_add_key", 00:22:42.892 "params": { 00:22:42.892 "name": "key0", 00:22:42.892 "path": "/tmp/tmp.xUfSP8AAwf" 00:22:42.892 } 00:22:42.892 } 00:22:42.892 ] 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "subsystem": "iobuf", 00:22:42.892 "config": [ 00:22:42.892 { 00:22:42.892 "method": "iobuf_set_options", 00:22:42.892 "params": { 00:22:42.892 "small_pool_count": 8192, 00:22:42.892 "large_pool_count": 1024, 00:22:42.892 "small_bufsize": 8192, 00:22:42.892 "large_bufsize": 135168 00:22:42.892 } 00:22:42.892 } 00:22:42.892 ] 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "subsystem": "sock", 00:22:42.892 "config": [ 00:22:42.892 { 00:22:42.892 "method": "sock_set_default_impl", 00:22:42.892 "params": { 00:22:42.892 "impl_name": "posix" 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "sock_impl_set_options", 00:22:42.892 "params": { 00:22:42.892 "impl_name": "ssl", 00:22:42.892 "recv_buf_size": 4096, 00:22:42.892 "send_buf_size": 4096, 00:22:42.892 "enable_recv_pipe": true, 00:22:42.892 "enable_quickack": false, 00:22:42.892 "enable_placement_id": 0, 00:22:42.892 "enable_zerocopy_send_server": true, 00:22:42.892 "enable_zerocopy_send_client": false, 00:22:42.892 "zerocopy_threshold": 0, 00:22:42.892 "tls_version": 0, 00:22:42.892 "enable_ktls": false 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "sock_impl_set_options", 00:22:42.892 "params": { 00:22:42.892 "impl_name": "posix", 00:22:42.892 "recv_buf_size": 2097152, 00:22:42.892 "send_buf_size": 2097152, 00:22:42.892 "enable_recv_pipe": true, 00:22:42.892 "enable_quickack": false, 00:22:42.892 "enable_placement_id": 0, 00:22:42.892 "enable_zerocopy_send_server": true, 00:22:42.892 "enable_zerocopy_send_client": false, 00:22:42.892 "zerocopy_threshold": 0, 00:22:42.892 "tls_version": 0, 00:22:42.892 "enable_ktls": false 00:22:42.892 } 00:22:42.892 } 00:22:42.892 ] 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "subsystem": "vmd", 00:22:42.892 "config": [] 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "subsystem": "accel", 00:22:42.892 "config": [ 00:22:42.892 { 00:22:42.892 "method": "accel_set_options", 00:22:42.892 "params": { 00:22:42.892 "small_cache_size": 128, 00:22:42.892 "large_cache_size": 16, 00:22:42.892 "task_count": 2048, 00:22:42.892 "sequence_count": 2048, 00:22:42.892 "buf_count": 2048 00:22:42.892 } 00:22:42.892 } 00:22:42.892 ] 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "subsystem": "bdev", 00:22:42.892 "config": [ 00:22:42.892 { 00:22:42.892 
"method": "bdev_set_options", 00:22:42.892 "params": { 00:22:42.892 "bdev_io_pool_size": 65535, 00:22:42.892 "bdev_io_cache_size": 256, 00:22:42.892 "bdev_auto_examine": true, 00:22:42.892 "iobuf_small_cache_size": 128, 00:22:42.892 "iobuf_large_cache_size": 16 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "bdev_raid_set_options", 00:22:42.892 "params": { 00:22:42.892 "process_window_size_kb": 1024 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "bdev_iscsi_set_options", 00:22:42.892 "params": { 00:22:42.892 "timeout_sec": 30 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "bdev_nvme_set_options", 00:22:42.892 "params": { 00:22:42.892 "action_on_timeout": "none", 00:22:42.892 "timeout_us": 0, 00:22:42.892 "timeout_admin_us": 0, 00:22:42.892 "keep_alive_timeout_ms": 10000, 00:22:42.892 "arbitration_burst": 0, 00:22:42.892 "low_priority_weight": 0, 00:22:42.892 "medium_priority_weight": 0, 00:22:42.892 "high_priority_weight": 0, 00:22:42.892 "nvme_adminq_poll_period_us": 10000, 00:22:42.892 "nvme_ioq_poll_period_us": 0, 00:22:42.892 "io_queue_requests": 512, 00:22:42.892 "delay_cmd_submit": true, 00:22:42.892 "transport_retry_count": 4, 00:22:42.892 "bdev_retry_count": 3, 00:22:42.892 "transport_ack_timeout": 0, 00:22:42.892 "ctrlr_loss_timeout_sec": 0, 00:22:42.892 "reconnect_delay_sec": 0, 00:22:42.892 "fast_io_fail_timeout_sec": 0, 00:22:42.892 "disable_auto_failback": false, 00:22:42.892 "generate_uuids": false, 00:22:42.892 "transport_tos": 0, 00:22:42.892 "nvme_error_stat": false, 00:22:42.892 "rdma_srq_size": 0, 00:22:42.892 "io_path_stat": false, 00:22:42.892 "allow_accel_sequence": false, 00:22:42.892 "rdma_max_cq_size": 0, 00:22:42.892 "rdma_cm_event_timeout_ms": 0, 00:22:42.892 "dhchap_digests": [ 00:22:42.892 "sha256", 00:22:42.892 "sha384", 00:22:42.892 "sha512" 00:22:42.892 ], 00:22:42.892 "dhchap_dhgroups": [ 00:22:42.892 "null", 00:22:42.892 "ffdhe2048", 00:22:42.892 "ffdhe3072", 00:22:42.892 "ffdhe4096", 00:22:42.892 "ffdhe6144", 00:22:42.892 "ffdhe8192" 00:22:42.892 ] 00:22:42.892 } 00:22:42.892 }, 00:22:42.892 { 00:22:42.892 "method": "bdev_nvme_attach_controller", 00:22:42.892 "params": { 00:22:42.893 "name": "nvme0", 00:22:42.893 "trtype": "TCP", 00:22:42.893 "adrfam": "IPv4", 00:22:42.893 "traddr": "10.0.0.2", 00:22:42.893 "trsvcid": "4420", 00:22:42.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.893 "prchk_reftag": false, 00:22:42.893 "prchk_guard": false, 00:22:42.893 "ctrlr_loss_timeout_sec": 0, 00:22:42.893 "reconnect_delay_sec": 0, 00:22:42.893 "fast_io_fail_timeout_sec": 0, 00:22:42.893 "psk": "key0", 00:22:42.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.893 "hdgst": false, 00:22:42.893 "ddgst": false 00:22:42.893 } 00:22:42.893 }, 00:22:42.893 { 00:22:42.893 "method": "bdev_nvme_set_hotplug", 00:22:42.893 "params": { 00:22:42.893 "period_us": 100000, 00:22:42.893 "enable": false 00:22:42.893 } 00:22:42.893 }, 00:22:42.893 { 00:22:42.893 "method": "bdev_enable_histogram", 00:22:42.893 "params": { 00:22:42.893 "name": "nvme0n1", 00:22:42.893 "enable": true 00:22:42.893 } 00:22:42.893 }, 00:22:42.893 { 00:22:42.893 "method": "bdev_wait_for_examine" 00:22:42.893 } 00:22:42.893 ] 00:22:42.893 }, 00:22:42.893 { 00:22:42.893 "subsystem": "nbd", 00:22:42.893 "config": [] 00:22:42.893 } 00:22:42.893 ] 00:22:42.893 }' 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1590330 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1590330 
']' 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1590330 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590330 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590330' 00:22:42.893 killing process with pid 1590330 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1590330 00:22:42.893 Received shutdown signal, test time was about 1.000000 seconds 00:22:42.893 00:22:42.893 Latency(us) 00:22:42.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.893 =================================================================================================================== 00:22:42.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.893 11:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1590330 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1590248 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1590248 ']' 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1590248 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:42.893 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590248 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590248' 00:22:43.153 killing process with pid 1590248 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1590248 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1590248 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.153 11:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:43.153 "subsystems": [ 00:22:43.153 { 00:22:43.153 "subsystem": "keyring", 00:22:43.153 "config": [ 00:22:43.153 { 00:22:43.153 "method": "keyring_file_add_key", 00:22:43.153 "params": { 00:22:43.153 "name": "key0", 00:22:43.153 "path": "/tmp/tmp.xUfSP8AAwf" 00:22:43.153 } 00:22:43.153 } 00:22:43.153 ] 00:22:43.153 }, 00:22:43.153 { 00:22:43.153 "subsystem": "iobuf", 00:22:43.153 "config": [ 00:22:43.153 { 00:22:43.153 "method": "iobuf_set_options", 00:22:43.153 "params": { 00:22:43.153 "small_pool_count": 8192, 00:22:43.153 "large_pool_count": 1024, 00:22:43.153 "small_bufsize": 8192, 00:22:43.153 "large_bufsize": 135168 00:22:43.153 } 00:22:43.153 } 
00:22:43.153 ] 00:22:43.153 }, 00:22:43.153 { 00:22:43.153 "subsystem": "sock", 00:22:43.153 "config": [ 00:22:43.153 { 00:22:43.153 "method": "sock_set_default_impl", 00:22:43.153 "params": { 00:22:43.153 "impl_name": "posix" 00:22:43.153 } 00:22:43.153 }, 00:22:43.153 { 00:22:43.153 "method": "sock_impl_set_options", 00:22:43.153 "params": { 00:22:43.153 "impl_name": "ssl", 00:22:43.153 "recv_buf_size": 4096, 00:22:43.153 "send_buf_size": 4096, 00:22:43.153 "enable_recv_pipe": true, 00:22:43.153 "enable_quickack": false, 00:22:43.153 "enable_placement_id": 0, 00:22:43.153 "enable_zerocopy_send_server": true, 00:22:43.153 "enable_zerocopy_send_client": false, 00:22:43.153 "zerocopy_threshold": 0, 00:22:43.153 "tls_version": 0, 00:22:43.153 "enable_ktls": false 00:22:43.153 } 00:22:43.153 }, 00:22:43.153 { 00:22:43.153 "method": "sock_impl_set_options", 00:22:43.154 "params": { 00:22:43.154 "impl_name": "posix", 00:22:43.154 "recv_buf_size": 2097152, 00:22:43.154 "send_buf_size": 2097152, 00:22:43.154 "enable_recv_pipe": true, 00:22:43.154 "enable_quickack": false, 00:22:43.154 "enable_placement_id": 0, 00:22:43.154 "enable_zerocopy_send_server": true, 00:22:43.154 "enable_zerocopy_send_client": false, 00:22:43.154 "zerocopy_threshold": 0, 00:22:43.154 "tls_version": 0, 00:22:43.154 "enable_ktls": false 00:22:43.154 } 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "vmd", 00:22:43.154 "config": [] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "accel", 00:22:43.154 "config": [ 00:22:43.154 { 00:22:43.154 "method": "accel_set_options", 00:22:43.154 "params": { 00:22:43.154 "small_cache_size": 128, 00:22:43.154 "large_cache_size": 16, 00:22:43.154 "task_count": 2048, 00:22:43.154 "sequence_count": 2048, 00:22:43.154 "buf_count": 2048 00:22:43.154 } 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "bdev", 00:22:43.154 "config": [ 00:22:43.154 { 00:22:43.154 "method": "bdev_set_options", 00:22:43.154 "params": { 00:22:43.154 "bdev_io_pool_size": 65535, 00:22:43.154 "bdev_io_cache_size": 256, 00:22:43.154 "bdev_auto_examine": true, 00:22:43.154 "iobuf_small_cache_size": 128, 00:22:43.154 "iobuf_large_cache_size": 16 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_raid_set_options", 00:22:43.154 "params": { 00:22:43.154 "process_window_size_kb": 1024 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_iscsi_set_options", 00:22:43.154 "params": { 00:22:43.154 "timeout_sec": 30 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_nvme_set_options", 00:22:43.154 "params": { 00:22:43.154 "action_on_timeout": "none", 00:22:43.154 "timeout_us": 0, 00:22:43.154 "timeout_admin_us": 0, 00:22:43.154 "keep_alive_timeout_ms": 10000, 00:22:43.154 "arbitration_burst": 0, 00:22:43.154 "low_priority_weight": 0, 00:22:43.154 "medium_priority_weight": 0, 00:22:43.154 "high_priority_weight": 0, 00:22:43.154 "nvme_adminq_poll_period_us": 10000, 00:22:43.154 "nvme_ioq_poll_period_us": 0, 00:22:43.154 "io_queue_requests": 0, 00:22:43.154 "delay_cmd_submit": true, 00:22:43.154 "transport_retry_count": 4, 00:22:43.154 "bdev_retry_count": 3, 00:22:43.154 "transport_ack_timeout": 0, 00:22:43.154 "ctrlr_loss_timeout_sec": 0, 00:22:43.154 "reconnect_delay_sec": 0, 00:22:43.154 "fast_io_fail_timeout_sec": 0, 00:22:43.154 "disable_auto_failback": false, 00:22:43.154 "generate_uuids": false, 00:22:43.154 "transport_tos": 0, 00:22:43.154 "nvme_error_stat": false, 
00:22:43.154 "rdma_srq_size": 0, 00:22:43.154 "io_path_stat": false, 00:22:43.154 "allow_accel_sequence": false, 00:22:43.154 "rdma_max_cq_size": 0, 00:22:43.154 "rdma_cm_event_timeout_ms": 0, 00:22:43.154 "dhchap_digests": [ 00:22:43.154 "sha256", 00:22:43.154 "sha384", 00:22:43.154 "sha512" 00:22:43.154 ], 00:22:43.154 "dhchap_dhgroups": [ 00:22:43.154 "null", 00:22:43.154 "ffdhe2048", 00:22:43.154 "ffdhe3072", 00:22:43.154 "ffdhe4096", 00:22:43.154 "ffdhe6144", 00:22:43.154 "ffdhe8192" 00:22:43.154 ] 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_nvme_set_hotplug", 00:22:43.154 "params": { 00:22:43.154 "period_us": 100000, 00:22:43.154 "enable": false 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_malloc_create", 00:22:43.154 "params": { 00:22:43.154 "name": "malloc0", 00:22:43.154 "num_blocks": 8192, 00:22:43.154 "block_size": 4096, 00:22:43.154 "physical_block_size": 4096, 00:22:43.154 "uuid": "ba945694-342a-47b6-9858-2d69dca414a2", 00:22:43.154 "optimal_io_boundary": 0 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "bdev_wait_for_examine" 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "nbd", 00:22:43.154 "config": [] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "scheduler", 00:22:43.154 "config": [ 00:22:43.154 { 00:22:43.154 "method": "framework_set_scheduler", 00:22:43.154 "params": { 00:22:43.154 "name": "static" 00:22:43.154 } 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "subsystem": "nvmf", 00:22:43.154 "config": [ 00:22:43.154 { 00:22:43.154 "method": "nvmf_set_config", 00:22:43.154 "params": { 00:22:43.154 "discovery_filter": "match_any", 00:22:43.154 "admin_cmd_passthru": { 00:22:43.154 "identify_ctrlr": false 00:22:43.154 } 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_set_max_subsystems", 00:22:43.154 "params": { 00:22:43.154 "max_subsystems": 1024 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_set_crdt", 00:22:43.154 "params": { 00:22:43.154 "crdt1": 0, 00:22:43.154 "crdt2": 0, 00:22:43.154 "crdt3": 0 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_create_transport", 00:22:43.154 "params": { 00:22:43.154 "trtype": "TCP", 00:22:43.154 "max_queue_depth": 128, 00:22:43.154 "max_io_qpairs_per_ctrlr": 127, 00:22:43.154 "in_capsule_data_size": 4096, 00:22:43.154 "max_io_size": 131072, 00:22:43.154 "io_unit_size": 131072, 00:22:43.154 "max_aq_depth": 128, 00:22:43.154 "num_shared_buffers": 511, 00:22:43.154 "buf_cache_size": 4294967295, 00:22:43.154 "dif_insert_or_strip": false, 00:22:43.154 "zcopy": false, 00:22:43.154 "c2h_success": false, 00:22:43.154 "sock_priority": 0, 00:22:43.154 "abort_timeout_sec": 1, 00:22:43.154 "ack_timeout": 0, 00:22:43.154 "data_wr_pool_size": 0 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_create_subsystem", 00:22:43.154 "params": { 00:22:43.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.154 "allow_any_host": false, 00:22:43.154 "serial_number": "00000000000000000000", 00:22:43.154 "model_number": "SPDK bdev Controller", 00:22:43.154 "max_namespaces": 32, 00:22:43.154 "min_cntlid": 1, 00:22:43.154 "max_cntlid": 65519, 00:22:43.154 "ana_reporting": false 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_subsystem_add_host", 00:22:43.154 "params": { 00:22:43.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.154 "host": "nqn.2016-06.io.spdk:host1", 
00:22:43.154 "psk": "key0" 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_subsystem_add_ns", 00:22:43.154 "params": { 00:22:43.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.154 "namespace": { 00:22:43.154 "nsid": 1, 00:22:43.154 "bdev_name": "malloc0", 00:22:43.154 "nguid": "BA945694342A47B698582D69DCA414A2", 00:22:43.154 "uuid": "ba945694-342a-47b6-9858-2d69dca414a2", 00:22:43.154 "no_auto_visible": false 00:22:43.154 } 00:22:43.154 } 00:22:43.154 }, 00:22:43.154 { 00:22:43.154 "method": "nvmf_subsystem_add_listener", 00:22:43.154 "params": { 00:22:43.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.154 "listen_address": { 00:22:43.154 "trtype": "TCP", 00:22:43.154 "adrfam": "IPv4", 00:22:43.154 "traddr": "10.0.0.2", 00:22:43.154 "trsvcid": "4420" 00:22:43.154 }, 00:22:43.154 "secure_channel": true 00:22:43.154 } 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 } 00:22:43.154 ] 00:22:43.154 }' 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1590877 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1590877 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1590877 ']' 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:43.154 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.155 [2024-06-10 11:29:40.351125] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:43.155 [2024-06-10 11:29:40.351180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.416 [2024-06-10 11:29:40.436360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.416 [2024-06-10 11:29:40.498594] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.416 [2024-06-10 11:29:40.498628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.416 [2024-06-10 11:29:40.498635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.416 [2024-06-10 11:29:40.498641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.416 [2024-06-10 11:29:40.498646] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.416 [2024-06-10 11:29:40.498694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.677 [2024-06-10 11:29:40.693070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.677 [2024-06-10 11:29:40.725073] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:43.677 [2024-06-10 11:29:40.744033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1590978 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1590978 /var/tmp/bdevperf.sock 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1590978 ']' 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.249 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:44.249 "subsystems": [ 00:22:44.249 { 00:22:44.249 "subsystem": "keyring", 00:22:44.249 "config": [ 00:22:44.249 { 00:22:44.249 "method": "keyring_file_add_key", 00:22:44.249 "params": { 00:22:44.249 "name": "key0", 00:22:44.249 "path": "/tmp/tmp.xUfSP8AAwf" 00:22:44.249 } 00:22:44.249 } 00:22:44.249 ] 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "subsystem": "iobuf", 00:22:44.249 "config": [ 00:22:44.249 { 00:22:44.249 "method": "iobuf_set_options", 00:22:44.249 "params": { 00:22:44.249 "small_pool_count": 8192, 00:22:44.249 "large_pool_count": 1024, 00:22:44.249 "small_bufsize": 8192, 00:22:44.249 "large_bufsize": 135168 00:22:44.249 } 00:22:44.249 } 00:22:44.249 ] 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "subsystem": "sock", 00:22:44.249 "config": [ 00:22:44.249 { 00:22:44.249 "method": "sock_set_default_impl", 00:22:44.249 "params": { 00:22:44.249 "impl_name": "posix" 00:22:44.249 } 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "method": "sock_impl_set_options", 00:22:44.249 "params": { 00:22:44.249 "impl_name": "ssl", 00:22:44.249 "recv_buf_size": 4096, 00:22:44.249 "send_buf_size": 4096, 00:22:44.249 "enable_recv_pipe": true, 00:22:44.249 "enable_quickack": false, 00:22:44.249 "enable_placement_id": 0, 00:22:44.249 "enable_zerocopy_send_server": true, 00:22:44.249 "enable_zerocopy_send_client": false, 00:22:44.249 "zerocopy_threshold": 0, 00:22:44.249 "tls_version": 0, 00:22:44.249 "enable_ktls": false 00:22:44.249 } 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "method": "sock_impl_set_options", 00:22:44.249 "params": { 00:22:44.249 "impl_name": "posix", 00:22:44.249 "recv_buf_size": 2097152, 00:22:44.249 "send_buf_size": 2097152, 00:22:44.249 "enable_recv_pipe": true, 00:22:44.249 "enable_quickack": false, 00:22:44.249 "enable_placement_id": 0, 00:22:44.249 "enable_zerocopy_send_server": true, 00:22:44.249 "enable_zerocopy_send_client": false, 00:22:44.249 "zerocopy_threshold": 0, 00:22:44.249 "tls_version": 0, 00:22:44.249 "enable_ktls": false 00:22:44.249 } 00:22:44.249 } 00:22:44.249 ] 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "subsystem": "vmd", 00:22:44.249 "config": [] 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "subsystem": "accel", 00:22:44.249 "config": [ 00:22:44.249 { 00:22:44.249 "method": "accel_set_options", 00:22:44.249 "params": { 00:22:44.249 "small_cache_size": 128, 00:22:44.249 "large_cache_size": 16, 00:22:44.249 "task_count": 2048, 00:22:44.249 "sequence_count": 2048, 00:22:44.249 "buf_count": 2048 00:22:44.249 } 00:22:44.249 } 00:22:44.249 ] 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "subsystem": "bdev", 00:22:44.249 "config": [ 00:22:44.249 { 00:22:44.249 "method": "bdev_set_options", 00:22:44.249 "params": { 00:22:44.249 "bdev_io_pool_size": 65535, 00:22:44.249 "bdev_io_cache_size": 256, 00:22:44.249 "bdev_auto_examine": true, 00:22:44.249 "iobuf_small_cache_size": 128, 00:22:44.249 "iobuf_large_cache_size": 16 00:22:44.249 } 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "method": "bdev_raid_set_options", 00:22:44.249 "params": { 00:22:44.249 "process_window_size_kb": 1024 00:22:44.249 } 
00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "method": "bdev_iscsi_set_options", 00:22:44.249 "params": { 00:22:44.249 "timeout_sec": 30 00:22:44.249 } 00:22:44.249 }, 00:22:44.249 { 00:22:44.249 "method": "bdev_nvme_set_options", 00:22:44.249 "params": { 00:22:44.249 "action_on_timeout": "none", 00:22:44.249 "timeout_us": 0, 00:22:44.249 "timeout_admin_us": 0, 00:22:44.249 "keep_alive_timeout_ms": 10000, 00:22:44.249 "arbitration_burst": 0, 00:22:44.249 "low_priority_weight": 0, 00:22:44.249 "medium_priority_weight": 0, 00:22:44.249 "high_priority_weight": 0, 00:22:44.250 "nvme_adminq_poll_period_us": 10000, 00:22:44.250 "nvme_ioq_poll_period_us": 0, 00:22:44.250 "io_queue_requests": 512, 00:22:44.250 "delay_cmd_submit": true, 00:22:44.250 "transport_retry_count": 4, 00:22:44.250 "bdev_retry_count": 3, 00:22:44.250 "transport_ack_timeout": 0, 00:22:44.250 "ctrlr_loss_timeout_sec": 0, 00:22:44.250 "reconnect_delay_sec": 0, 00:22:44.250 "fast_io_fail_timeout_sec": 0, 00:22:44.250 "disable_auto_failback": false, 00:22:44.250 "generate_uuids": false, 00:22:44.250 "transport_tos": 0, 00:22:44.250 "nvme_error_stat": false, 00:22:44.250 "rdma_srq_size": 0, 00:22:44.250 "io_path_stat": false, 00:22:44.250 "allow_accel_sequence": false, 00:22:44.250 "rdma_max_cq_size": 0, 00:22:44.250 "rdma_cm_event_timeout_ms": 0, 00:22:44.250 "dhchap_digests": [ 00:22:44.250 "sha256", 00:22:44.250 "sha384", 00:22:44.250 "sha512" 00:22:44.250 ], 00:22:44.250 "dhchap_dhgroups": [ 00:22:44.250 "null", 00:22:44.250 "ffdhe2048", 00:22:44.250 "ffdhe3072", 00:22:44.250 "ffdhe4096", 00:22:44.250 "ffdhe6144", 00:22:44.250 "ffdhe8192" 00:22:44.250 ] 00:22:44.250 } 00:22:44.250 }, 00:22:44.250 { 00:22:44.250 "method": "bdev_nvme_attach_controller", 00:22:44.250 "params": { 00:22:44.250 "name": "nvme0", 00:22:44.250 "trtype": "TCP", 00:22:44.250 "adrfam": "IPv4", 00:22:44.250 "traddr": "10.0.0.2", 00:22:44.250 "trsvcid": "4420", 00:22:44.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.250 "prchk_reftag": false, 00:22:44.250 "prchk_guard": false, 00:22:44.250 "ctrlr_loss_timeout_sec": 0, 00:22:44.250 "reconnect_delay_sec": 0, 00:22:44.250 "fast_io_fail_timeout_sec": 0, 00:22:44.250 "psk": "key0", 00:22:44.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.250 "hdgst": false, 00:22:44.250 "ddgst": false 00:22:44.250 } 00:22:44.250 }, 00:22:44.250 { 00:22:44.250 "method": "bdev_nvme_set_hotplug", 00:22:44.250 "params": { 00:22:44.250 "period_us": 100000, 00:22:44.250 "enable": false 00:22:44.250 } 00:22:44.250 }, 00:22:44.250 { 00:22:44.250 "method": "bdev_enable_histogram", 00:22:44.250 "params": { 00:22:44.250 "name": "nvme0n1", 00:22:44.250 "enable": true 00:22:44.250 } 00:22:44.250 }, 00:22:44.250 { 00:22:44.250 "method": "bdev_wait_for_examine" 00:22:44.250 } 00:22:44.250 ] 00:22:44.250 }, 00:22:44.250 { 00:22:44.250 "subsystem": "nbd", 00:22:44.250 "config": [] 00:22:44.250 } 00:22:44.250 ] 00:22:44.250 }' 00:22:44.250 [2024-06-10 11:29:41.284843] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
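Note: the JSON blob echoed above is what bdevperf reads from /dev/fd/63 (the -c /dev/fd/63 argument on its command line). It registers the PSK file /tmp/tmp.xUfSP8AAwf as keyring key "key0" and has bdev_nvme_attach_controller reference it through "psk": "key0". A hedged equivalent issued over the RPC socket after start-up could look like the sketch below; names, address and NQNs are copied from the config above, and whether --psk accepts a key name or only a file path depends on the SPDK revision.

    # sketch only: initiator-side TLS attach via rpc.py instead of the start-up JSON config
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xUfSP8AAwf
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0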
00:22:44.250 [2024-06-10 11:29:41.284909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590978 ] 00:22:44.250 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.250 [2024-06-10 11:29:41.346554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.250 [2024-06-10 11:29:41.407720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.511 [2024-06-10 11:29:41.544031] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.083 11:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:45.083 11:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:45.083 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.083 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:45.345 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.345 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.345 Running I/O for 1 seconds... 00:22:46.285 00:22:46.285 Latency(us) 00:22:46.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.285 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:46.285 Verification LBA range: start 0x0 length 0x2000 00:22:46.285 nvme0n1 : 1.02 4314.42 16.85 0.00 0.00 29386.44 8217.21 71383.83 00:22:46.285 =================================================================================================================== 00:22:46.285 Total : 4314.42 16.85 0.00 0.00 29386.44 8217.21 71383.83 00:22:46.285 0 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:22:46.285 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:46.286 nvmf_trace.0 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1590978 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1590978 ']' 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1590978 
00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590978 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590978' 00:22:46.546 killing process with pid 1590978 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1590978 00:22:46.546 Received shutdown signal, test time was about 1.000000 seconds 00:22:46.546 00:22:46.546 Latency(us) 00:22:46.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.546 =================================================================================================================== 00:22:46.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1590978 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.546 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.546 rmmod nvme_tcp 00:22:46.546 rmmod nvme_fabrics 00:22:46.546 rmmod nvme_keyring 00:22:46.806 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.806 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:46.806 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:46.806 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1590877 ']' 00:22:46.806 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1590877 ']' 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590877' 00:22:46.807 killing process with pid 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1590877 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.807 11:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.354 11:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.354 11:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UY1I2qiue9 /tmp/tmp.j6QnPz1skZ /tmp/tmp.xUfSP8AAwf 00:22:49.354 00:22:49.354 real 1m21.478s 00:22:49.354 user 2m2.785s 00:22:49.354 sys 0m28.312s 00:22:49.354 11:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:49.354 11:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.354 ************************************ 00:22:49.354 END TEST nvmf_tls 00:22:49.354 ************************************ 00:22:49.354 11:29:46 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:49.354 11:29:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:49.354 11:29:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:49.354 11:29:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.354 ************************************ 00:22:49.354 START TEST nvmf_fips 00:22:49.354 ************************************ 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:49.354 * Looking for test storage... 
00:22:49.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.354 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.355 11:29:46 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:49.355 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:22:49.356 Error setting digest 00:22:49.356 00A21F15677F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:49.356 00A21F15677F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.356 11:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.493 
11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.493 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:57.493 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:22:57.494 00:22:57.494 --- 10.0.0.2 ping statistics --- 00:22:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.494 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:22:57.494 00:22:57.494 --- 10.0.0.1 ping statistics --- 00:22:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.494 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1595830 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1595830 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1595830 ']' 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:57.494 11:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:57.754 [2024-06-10 11:29:54.771905] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:57.754 [2024-06-10 11:29:54.771972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.754 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.755 [2024-06-10 11:29:54.849002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.755 [2024-06-10 11:29:54.919403] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.755 [2024-06-10 11:29:54.919443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:57.755 [2024-06-10 11:29:54.919451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.755 [2024-06-10 11:29:54.919457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.755 [2024-06-10 11:29:54.919462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.755 [2024-06-10 11:29:54.919482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:58.694 [2024-06-10 11:29:55.813581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.694 [2024-06-10 11:29:55.829585] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.694 [2024-06-10 11:29:55.829755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.694 [2024-06-10 11:29:55.855912] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:58.694 malloc0 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1596132 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1596132 /var/tmp/bdevperf.sock 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1596132 ']' 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:58.694 11:29:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:58.955 [2024-06-10 11:29:55.936702] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:22:58.955 [2024-06-10 11:29:55.936758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596132 ] 00:22:58.955 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.955 [2024-06-10 11:29:55.991441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.955 [2024-06-10 11:29:56.044348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.524 11:29:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:59.524 11:29:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:22:59.524 11:29:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:59.783 [2024-06-10 11:29:56.908828] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.783 [2024-06-10 11:29:56.908907] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.783 TLSTESTn1 00:23:00.043 11:29:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.043 Running I/O for 10 seconds... 
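Note: the TLS wiring exercised here is symmetric. The retained PSK written to key.txt a few lines up (echo -n ...; chmod 0600) is handed to the target when the host is allowed on the subsystem (the nvmf_tcp_psk_path deprecation warning above comes from that call) and to the initiator through the bdev_nvme_attach_controller --psk argument shown in the trace. A minimal hedged sketch of the target-side half follows; the --psk flag on nvmf_subsystem_add_host is an assumption about this revision's rpc.py, and the suite's setup_nvmf_tgt_conf helper is what actually issues the equivalent calls.

    # sketch only: create the retained PSK exactly as the test does, then allow the host with it on the target
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > key.txt
    chmod 0600 key.txt
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt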
00:23:10.035 00:23:10.035 Latency(us) 00:23:10.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.035 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.035 Verification LBA range: start 0x0 length 0x2000 00:23:10.035 TLSTESTn1 : 10.03 4144.02 16.19 0.00 0.00 30830.18 5620.97 71383.83 00:23:10.035 =================================================================================================================== 00:23:10.035 Total : 4144.02 16.19 0.00 0.00 30830.18 5620.97 71383.83 00:23:10.035 0 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:10.035 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:10.035 nvmf_trace.0 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1596132 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1596132 ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1596132 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1596132 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1596132' 00:23:10.307 killing process with pid 1596132 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1596132 00:23:10.307 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.307 00:23:10.307 Latency(us) 00:23:10.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.307 =================================================================================================================== 00:23:10.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.307 [2024-06-10 11:30:07.327095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1596132 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.307 rmmod nvme_tcp 00:23:10.307 rmmod nvme_fabrics 00:23:10.307 rmmod nvme_keyring 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1595830 ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1595830 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1595830 ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1595830 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:10.307 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1595830 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1595830' 00:23:10.639 killing process with pid 1595830 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1595830 00:23:10.639 [2024-06-10 11:30:07.560490] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1595830 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.639 11:30:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.551 11:30:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.551 11:30:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:12.551 00:23:12.551 real 0m23.650s 00:23:12.551 user 0m24.327s 00:23:12.551 sys 0m10.255s 00:23:12.551 11:30:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:12.551 11:30:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:12.551 ************************************ 00:23:12.551 END TEST nvmf_fips 
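Note: the pass criterion for this test, visible in the trace above, is that openssl list -providers reports the fips provider alongside the base provider while a non-approved digest (openssl md5) is rejected with "Error setting digest". That check can be repeated by hand; the sketch below assumes the spdk_fips.conf generated by build_openssl_config is still present in the working directory.

    # sketch only: mirror fips.sh's two checks -- fips provider present, MD5 refused
    OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep -i fips
    echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 && echo 'unexpected: MD5 succeeded'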
00:23:12.551 ************************************ 00:23:12.812 11:30:09 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:12.812 11:30:09 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:23:12.812 11:30:09 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:23:12.812 11:30:09 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:23:12.812 11:30:09 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.812 11:30:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:20.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.956 11:30:17 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.956 11:30:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:20.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:20.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:20.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:20.957 11:30:17 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:20.957 11:30:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:20.957 11:30:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:20.957 11:30:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
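The trace above is gather_supported_nvmf_pci_devs from test/nvmf/common.sh: it walks the PCI bus for Intel E810 device IDs (0x8086:0x1592 and 0x8086:0x159b), checks that each matching function's net device is up, and hands the resulting net_devs list to nvmf.sh as TCP_INTERFACE_LIST, which gates whether the perf_adq test below runs at all. A minimal standalone sketch of the same discovery, assuming an E810 port bound to the ice driver; it reads sysfs directly rather than the harness's pci_bus_cache, so the exact mechanics are illustrative, not the harness's code:

#!/usr/bin/env bash
# Enumerate Intel E810 PCI functions (8086:1592 / 8086:159b) and collect the
# kernel net interfaces bound to them, roughly what the trace above does.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")           # e.g. 0x8086
  device=$(cat "$pci/device")           # e.g. 0x159b
  [[ $vendor == 0x8086 ]] || continue
  [[ $device == 0x159b || $device == 0x1592 ]] || continue
  for netdir in "$pci"/net/*; do        # each function exposes its netdev(s) here
    [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
  done
done
echo "Found ${#net_devs[@]} candidate TCP interfaces: ${net_devs[*]}"

With two ports found in this run (cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1), the (( 2 > 0 )) check passes and run_test launches perf_adq.sh.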
00:23:20.957 ************************************ 00:23:20.957 START TEST nvmf_perf_adq 00:23:20.957 ************************************ 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:20.957 * Looking for test storage... 00:23:20.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:20.957 11:30:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.095 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:29.096 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:29.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:29.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:29.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:29.096 11:30:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:30.480 11:30:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:32.393 11:30:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:37.678 11:30:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:37.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:37.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:37.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:37.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.678 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.679 11:30:34 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:37.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:23:37.679 00:23:37.679 --- 10.0.0.2 ping statistics --- 00:23:37.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.679 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:23:37.679 00:23:37.679 --- 10.0.0.1 ping statistics --- 00:23:37.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.679 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1608610 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1608610 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1608610 ']' 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:37.679 11:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:37.679 [2024-06-10 11:30:34.798951] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:23:37.679 [2024-06-10 11:30:34.799011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.679 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.679 [2024-06-10 11:30:34.876888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.939 [2024-06-10 11:30:34.972710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.939 [2024-06-10 11:30:34.972776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.939 [2024-06-10 11:30:34.972784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.939 [2024-06-10 11:30:34.972791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.939 [2024-06-10 11:30:34.972797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.939 [2024-06-10 11:30:34.972934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.939 [2024-06-10 11:30:34.973189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.939 [2024-06-10 11:30:34.973351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.939 [2024-06-10 11:30:34.973354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.510 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.770 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 [2024-06-10 11:30:35.865726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 Malloc1 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 [2024-06-10 11:30:35.922277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1608714 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:38.771 11:30:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:38.771 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.318 11:30:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:41.318 11:30:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.318 11:30:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:41.319 11:30:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.319 11:30:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:41.319 "tick_rate": 2600000000, 
00:23:41.319 "poll_groups": [ 00:23:41.319 { 00:23:41.319 "name": "nvmf_tgt_poll_group_000", 00:23:41.319 "admin_qpairs": 1, 00:23:41.319 "io_qpairs": 1, 00:23:41.319 "current_admin_qpairs": 1, 00:23:41.319 "current_io_qpairs": 1, 00:23:41.319 "pending_bdev_io": 0, 00:23:41.319 "completed_nvme_io": 22079, 00:23:41.319 "transports": [ 00:23:41.319 { 00:23:41.319 "trtype": "TCP" 00:23:41.319 } 00:23:41.319 ] 00:23:41.319 }, 00:23:41.319 { 00:23:41.319 "name": "nvmf_tgt_poll_group_001", 00:23:41.319 "admin_qpairs": 0, 00:23:41.319 "io_qpairs": 1, 00:23:41.319 "current_admin_qpairs": 0, 00:23:41.319 "current_io_qpairs": 1, 00:23:41.319 "pending_bdev_io": 0, 00:23:41.319 "completed_nvme_io": 27877, 00:23:41.319 "transports": [ 00:23:41.319 { 00:23:41.319 "trtype": "TCP" 00:23:41.319 } 00:23:41.319 ] 00:23:41.319 }, 00:23:41.319 { 00:23:41.319 "name": "nvmf_tgt_poll_group_002", 00:23:41.319 "admin_qpairs": 0, 00:23:41.319 "io_qpairs": 1, 00:23:41.319 "current_admin_qpairs": 0, 00:23:41.319 "current_io_qpairs": 1, 00:23:41.319 "pending_bdev_io": 0, 00:23:41.319 "completed_nvme_io": 22074, 00:23:41.319 "transports": [ 00:23:41.319 { 00:23:41.319 "trtype": "TCP" 00:23:41.319 } 00:23:41.319 ] 00:23:41.319 }, 00:23:41.319 { 00:23:41.319 "name": "nvmf_tgt_poll_group_003", 00:23:41.319 "admin_qpairs": 0, 00:23:41.319 "io_qpairs": 1, 00:23:41.319 "current_admin_qpairs": 0, 00:23:41.319 "current_io_qpairs": 1, 00:23:41.319 "pending_bdev_io": 0, 00:23:41.319 "completed_nvme_io": 22023, 00:23:41.319 "transports": [ 00:23:41.319 { 00:23:41.319 "trtype": "TCP" 00:23:41.319 } 00:23:41.319 ] 00:23:41.319 } 00:23:41.319 ] 00:23:41.319 }' 00:23:41.319 11:30:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:41.319 11:30:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:41.319 11:30:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:41.319 11:30:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:41.319 11:30:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1608714 00:23:49.461 Initializing NVMe Controllers 00:23:49.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:49.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:49.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:49.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:49.461 Initialization complete. Launching workers. 
00:23:49.461 ======================================================== 00:23:49.461 Latency(us) 00:23:49.461 Device Information : IOPS MiB/s Average min max 00:23:49.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11660.58 45.55 5489.51 1538.60 8838.26 00:23:49.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14974.28 58.49 4273.46 1247.25 10330.36 00:23:49.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11717.58 45.77 5462.82 1513.92 11762.61 00:23:49.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11742.28 45.87 5450.89 1399.78 11800.17 00:23:49.461 ======================================================== 00:23:49.461 Total : 50094.72 195.68 5110.71 1247.25 11800.17 00:23:49.461 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.461 rmmod nvme_tcp 00:23:49.461 rmmod nvme_fabrics 00:23:49.461 rmmod nvme_keyring 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1608610 ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1608610 ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1608610' 00:23:49.461 killing process with pid 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1608610 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.461 11:30:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.376 11:30:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.376 11:30:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:51.376 11:30:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:52.763 11:30:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:55.391 11:30:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.677 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.678 
11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:00.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:00.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
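The device scan feeding nvmftestinit repeats here for the ADQ pass; once it again finds cvl_0_0 and cvl_0_1, nvmf_tcp_init rebuilds the same back-to-back topology the 11:30:34 pass used: the first port moves into a network namespace as the target (10.0.0.2) and the second stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands traced in that earlier pass, as a sketch using the interface and namespace names from this run (run as root):

# Back-to-back NVMe/TCP test topology over one dual-port NIC, with the target
# port isolated in its own network namespace (as traced by nvmf_tcp_init above).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept NVMe/TCP (4420) on the host-side port
ping -c 1 10.0.0.2                                   # initiator -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

The cross-namespace pings at the end are the pass/fail gate; the nvmf_tgt that serves the target side is then launched under ip netns exec cvl_0_0_ns_spdk.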
00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:00.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:00.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.678 11:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.678 
11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:24:00.678 00:24:00.678 --- 10.0.0.2 ping statistics --- 00:24:00.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.678 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:24:00.678 00:24:00.678 --- 10.0.0.1 ping statistics --- 00:24:00.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.678 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.678 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:00.679 net.core.busy_poll = 1 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:00.679 net.core.busy_read = 1 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1612706 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1612706 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1612706 ']' 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:00.679 11:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.679 [2024-06-10 11:30:57.602374] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:24:00.679 [2024-06-10 11:30:57.602442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.679 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.679 [2024-06-10 11:30:57.701455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.679 [2024-06-10 11:30:57.795019] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.679 [2024-06-10 11:30:57.795083] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.679 [2024-06-10 11:30:57.795090] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.679 [2024-06-10 11:30:57.795097] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.679 [2024-06-10 11:30:57.795103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
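The ADQ host configuration that perf_adq.sh applies in the trace above boils down to a handful of commands. The sketch below restates them for the E810 port cvl_0_0 inside the cvl_0_0_ns_spdk namespace and the NVMe/TCP listener at 10.0.0.2:4420 used in this run; treat it as a condensed reading aid rather than a substitute for the script (which additionally pins queue affinities via scripts/perf/nvmf/set_xps_rxqs).

    # Enable hardware traffic-class offload and turn off the ice driver's
    # packet-inspect optimization, as ADQ requires
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

    # Busy-poll sockets instead of waiting for interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Split the queues into two traffic classes, then steer NVMe/TCP traffic
    # (TCP dport 4420 towards 10.0.0.2) into TC 1 entirely in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1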
00:24:00.679 [2024-06-10 11:30:57.795231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.679 [2024-06-10 11:30:57.795354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.679 [2024-06-10 11:30:57.795513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.679 [2024-06-10 11:30:57.795513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.249 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:01.249 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:24:01.249 11:30:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.249 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:01.249 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 [2024-06-10 11:30:58.629004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 Malloc1 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.510 [2024-06-10 11:30:58.669471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1612943 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:01.510 11:30:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:01.510 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.053 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:04.053 11:31:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.053 11:31:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.053 11:31:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.053 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:04.053 "tick_rate": 2600000000, 00:24:04.053 "poll_groups": [ 00:24:04.053 { 00:24:04.053 "name": "nvmf_tgt_poll_group_000", 00:24:04.053 "admin_qpairs": 1, 00:24:04.053 "io_qpairs": 1, 00:24:04.053 "current_admin_qpairs": 1, 00:24:04.053 "current_io_qpairs": 1, 00:24:04.053 "pending_bdev_io": 0, 00:24:04.053 "completed_nvme_io": 31565, 00:24:04.053 "transports": [ 00:24:04.053 { 00:24:04.053 "trtype": "TCP" 00:24:04.053 } 00:24:04.053 ] 00:24:04.053 }, 00:24:04.053 { 00:24:04.053 "name": "nvmf_tgt_poll_group_001", 00:24:04.053 "admin_qpairs": 0, 00:24:04.053 "io_qpairs": 3, 00:24:04.053 "current_admin_qpairs": 0, 00:24:04.053 "current_io_qpairs": 3, 00:24:04.053 "pending_bdev_io": 0, 00:24:04.053 "completed_nvme_io": 40186, 00:24:04.053 "transports": [ 00:24:04.053 { 00:24:04.053 "trtype": "TCP" 00:24:04.053 } 00:24:04.053 ] 00:24:04.053 }, 00:24:04.053 { 00:24:04.053 "name": "nvmf_tgt_poll_group_002", 00:24:04.053 "admin_qpairs": 0, 00:24:04.053 "io_qpairs": 0, 00:24:04.053 "current_admin_qpairs": 0, 00:24:04.053 "current_io_qpairs": 0, 00:24:04.053 "pending_bdev_io": 0, 00:24:04.053 "completed_nvme_io": 0, 
00:24:04.053 "transports": [ 00:24:04.053 { 00:24:04.053 "trtype": "TCP" 00:24:04.053 } 00:24:04.053 ] 00:24:04.053 }, 00:24:04.053 { 00:24:04.053 "name": "nvmf_tgt_poll_group_003", 00:24:04.053 "admin_qpairs": 0, 00:24:04.053 "io_qpairs": 0, 00:24:04.053 "current_admin_qpairs": 0, 00:24:04.053 "current_io_qpairs": 0, 00:24:04.053 "pending_bdev_io": 0, 00:24:04.053 "completed_nvme_io": 0, 00:24:04.053 "transports": [ 00:24:04.053 { 00:24:04.054 "trtype": "TCP" 00:24:04.054 } 00:24:04.054 ] 00:24:04.054 } 00:24:04.054 ] 00:24:04.054 }' 00:24:04.054 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:04.054 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:04.054 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:24:04.054 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:24:04.054 11:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1612943 00:24:12.191 Initializing NVMe Controllers 00:24:12.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:12.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:12.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:12.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:12.191 Initialization complete. Launching workers. 00:24:12.191 ======================================================== 00:24:12.191 Latency(us) 00:24:12.191 Device Information : IOPS MiB/s Average min max 00:24:12.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7024.70 27.44 9141.17 1538.07 54361.20 00:24:12.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 16940.30 66.17 3777.61 1436.28 9413.95 00:24:12.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6538.30 25.54 9818.34 1504.56 54987.16 00:24:12.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7742.50 30.24 8289.67 1200.18 53191.00 00:24:12.191 ======================================================== 00:24:12.191 Total : 38245.80 149.40 6708.86 1200.18 54987.16 00:24:12.191 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.191 rmmod nvme_tcp 00:24:12.191 rmmod nvme_fabrics 00:24:12.191 rmmod nvme_keyring 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1612706 ']' 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1612706 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1612706 ']' 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1612706 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1612706 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1612706' 00:24:12.191 killing process with pid 1612706 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1612706 00:24:12.191 11:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1612706 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.191 11:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.104 11:31:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.104 11:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:14.104 00:24:14.104 real 0m53.430s 00:24:14.104 user 2m49.794s 00:24:14.104 sys 0m11.465s 00:24:14.104 11:31:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:14.104 11:31:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.104 ************************************ 00:24:14.104 END TEST nvmf_perf_adq 00:24:14.104 ************************************ 00:24:14.104 11:31:11 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.104 11:31:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:14.104 11:31:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:14.104 11:31:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.104 ************************************ 00:24:14.104 START TEST nvmf_shutdown 00:24:14.104 ************************************ 00:24:14.104 11:31:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.366 * Looking for test storage... 
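The pass/fail signal for the ADQ run that just finished comes from the target's poll-group statistics: with the flower filter steering all NVMe/TCP traffic into one traffic class, some poll groups should be left with no I/O queue pairs at all, which is what the nvmf_get_stats output above shows (two groups with "current_io_qpairs": 0). A standalone version of that check might look like the following; the scripts/rpc.py invocation and the variable name are illustrative (the test itself goes through its rpc_cmd wrapper and a slightly different jq expression), and the threshold of two idle groups mirrors this particular core mask.

    # Count poll groups with no I/O queue pairs while the perf workload is running
    idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
        | wc -l)

    # ADQ steering is considered effective only if at least two groups stayed idle
    if (( idle < 2 )); then
        echo "ADQ check failed: only $idle idle poll group(s)" >&2
        exit 1
    fi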
00:24:14.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 ************************************ 00:24:14.366 START TEST nvmf_shutdown_tc1 00:24:14.366 ************************************ 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:24:14.366 11:31:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.366 11:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.509 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:22.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:22.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.510 11:31:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:22.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:22.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:24:22.510 00:24:22.510 --- 10.0.0.2 ping statistics --- 00:24:22.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.510 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.491 ms 00:24:22.510 00:24:22.510 --- 10.0.0.1 ping statistics --- 00:24:22.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.510 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1619160 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1619160 00:24:22.510 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1619160 ']' 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.511 11:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:22.511 [2024-06-10 11:31:19.705778] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
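Both test suites reuse the same physical loopback topology that nvmf_tcp_init sets up in the trace above: one E810 port (cvl_0_1) stays in the root namespace and acts as the initiator, while its peer port (cvl_0_0) is moved into a private network namespace and hosts the target, so the TCP traffic genuinely leaves one port and arrives on the other. Condensed from the commands above, using this run's interface names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
    ping -c 1 10.0.0.2                                         # reachability check before starting the target

nvmf_tgt itself is then launched under "ip netns exec cvl_0_0_ns_spdk", so the listener it later creates on 10.0.0.2:4420 binds inside the namespace.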
00:24:22.511 [2024-06-10 11:31:19.705850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.772 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.772 [2024-06-10 11:31:19.779683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.772 [2024-06-10 11:31:19.850868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.772 [2024-06-10 11:31:19.850903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.772 [2024-06-10 11:31:19.850910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.772 [2024-06-10 11:31:19.850916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.772 [2024-06-10 11:31:19.850921] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.772 [2024-06-10 11:31:19.851042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.772 [2024-06-10 11:31:19.851193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.772 [2024-06-10 11:31:19.851340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.772 [2024-06-10 11:31:19.851342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:24:23.343 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:23.343 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:23.343 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.343 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:23.343 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.603 [2024-06-10 11:31:20.598644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.603 11:31:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.603 Malloc1 00:24:23.603 [2024-06-10 11:31:20.698682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.603 Malloc2 00:24:23.603 Malloc3 00:24:23.603 Malloc4 00:24:23.864 Malloc5 00:24:23.864 Malloc6 00:24:23.864 Malloc7 00:24:23.864 Malloc8 00:24:23.864 Malloc9 00:24:23.864 Malloc10 00:24:23.864 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.864 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:23.864 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:23.864 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1619389 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1619389 /var/tmp/bdevperf.sock 00:24:24.125 11:31:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1619389 ']' 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.125 { 00:24:24.125 "params": { 00:24:24.125 "name": "Nvme$subsystem", 00:24:24.125 "trtype": "$TEST_TRANSPORT", 00:24:24.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.125 "adrfam": "ipv4", 00:24:24.125 "trsvcid": "$NVMF_PORT", 00:24:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.125 "hdgst": ${hdgst:-false}, 00:24:24.125 "ddgst": ${ddgst:-false} 00:24:24.125 }, 00:24:24.125 "method": "bdev_nvme_attach_controller" 00:24:24.125 } 00:24:24.125 EOF 00:24:24.125 )") 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.125 { 00:24:24.125 "params": { 00:24:24.125 "name": "Nvme$subsystem", 00:24:24.125 "trtype": "$TEST_TRANSPORT", 00:24:24.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.125 "adrfam": "ipv4", 00:24:24.125 "trsvcid": "$NVMF_PORT", 00:24:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.125 "hdgst": ${hdgst:-false}, 00:24:24.125 "ddgst": ${ddgst:-false} 00:24:24.125 }, 00:24:24.125 "method": "bdev_nvme_attach_controller" 00:24:24.125 } 00:24:24.125 EOF 00:24:24.125 )") 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.125 { 00:24:24.125 "params": { 00:24:24.125 "name": "Nvme$subsystem", 00:24:24.125 "trtype": 
"$TEST_TRANSPORT", 00:24:24.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.125 "adrfam": "ipv4", 00:24:24.125 "trsvcid": "$NVMF_PORT", 00:24:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.125 "hdgst": ${hdgst:-false}, 00:24:24.125 "ddgst": ${ddgst:-false} 00:24:24.125 }, 00:24:24.125 "method": "bdev_nvme_attach_controller" 00:24:24.125 } 00:24:24.125 EOF 00:24:24.125 )") 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.125 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.125 { 00:24:24.125 "params": { 00:24:24.125 "name": "Nvme$subsystem", 00:24:24.125 "trtype": "$TEST_TRANSPORT", 00:24:24.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.125 "adrfam": "ipv4", 00:24:24.125 "trsvcid": "$NVMF_PORT", 00:24:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 [2024-06-10 11:31:21.147132] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:24:24.126 [2024-06-10 11:31:21.147185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": ${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.126 { 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme$subsystem", 00:24:24.126 "trtype": "$TEST_TRANSPORT", 00:24:24.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "$NVMF_PORT", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.126 "hdgst": ${hdgst:-false}, 00:24:24.126 "ddgst": 
${ddgst:-false} 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 } 00:24:24.126 EOF 00:24:24.126 )") 00:24:24.126 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:24.126 11:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme1", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 },{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme2", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 },{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme3", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 },{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme4", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 },{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme5", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.126 },{ 00:24:24.126 "params": { 00:24:24.126 "name": "Nvme6", 00:24:24.126 "trtype": "tcp", 00:24:24.126 "traddr": "10.0.0.2", 00:24:24.126 "adrfam": "ipv4", 00:24:24.126 "trsvcid": "4420", 00:24:24.126 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:24.126 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:24.126 "hdgst": false, 00:24:24.126 "ddgst": false 00:24:24.126 }, 00:24:24.126 "method": "bdev_nvme_attach_controller" 00:24:24.127 },{ 00:24:24.127 "params": { 00:24:24.127 "name": "Nvme7", 00:24:24.127 "trtype": "tcp", 00:24:24.127 "traddr": "10.0.0.2", 00:24:24.127 "adrfam": "ipv4", 00:24:24.127 "trsvcid": "4420", 00:24:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:24.127 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:24.127 "hdgst": false, 00:24:24.127 "ddgst": false 00:24:24.127 }, 
00:24:24.127 "method": "bdev_nvme_attach_controller" 00:24:24.127 },{ 00:24:24.127 "params": { 00:24:24.127 "name": "Nvme8", 00:24:24.127 "trtype": "tcp", 00:24:24.127 "traddr": "10.0.0.2", 00:24:24.127 "adrfam": "ipv4", 00:24:24.127 "trsvcid": "4420", 00:24:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:24.127 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:24.127 "hdgst": false, 00:24:24.127 "ddgst": false 00:24:24.127 }, 00:24:24.127 "method": "bdev_nvme_attach_controller" 00:24:24.127 },{ 00:24:24.127 "params": { 00:24:24.127 "name": "Nvme9", 00:24:24.127 "trtype": "tcp", 00:24:24.127 "traddr": "10.0.0.2", 00:24:24.127 "adrfam": "ipv4", 00:24:24.127 "trsvcid": "4420", 00:24:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:24.127 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:24.127 "hdgst": false, 00:24:24.127 "ddgst": false 00:24:24.127 }, 00:24:24.127 "method": "bdev_nvme_attach_controller" 00:24:24.127 },{ 00:24:24.127 "params": { 00:24:24.127 "name": "Nvme10", 00:24:24.127 "trtype": "tcp", 00:24:24.127 "traddr": "10.0.0.2", 00:24:24.127 "adrfam": "ipv4", 00:24:24.127 "trsvcid": "4420", 00:24:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:24.127 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:24.127 "hdgst": false, 00:24:24.127 "ddgst": false 00:24:24.127 }, 00:24:24.127 "method": "bdev_nvme_attach_controller" 00:24:24.127 }' 00:24:24.127 [2024-06-10 11:31:21.230783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.127 [2024-06-10 11:31:21.292835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1619389 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:25.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1619389 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:25.512 11:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1619160 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:26.895 11:31:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.895 { 00:24:26.895 "params": { 00:24:26.895 "name": "Nvme$subsystem", 00:24:26.895 "trtype": "$TEST_TRANSPORT", 00:24:26.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.895 "adrfam": "ipv4", 00:24:26.895 "trsvcid": "$NVMF_PORT", 00:24:26.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.895 "hdgst": ${hdgst:-false}, 00:24:26.895 "ddgst": ${ddgst:-false} 00:24:26.895 }, 00:24:26.895 "method": "bdev_nvme_attach_controller" 00:24:26.895 } 00:24:26.895 EOF 00:24:26.895 )") 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.895 { 00:24:26.895 "params": { 00:24:26.895 "name": "Nvme$subsystem", 00:24:26.895 "trtype": "$TEST_TRANSPORT", 00:24:26.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.895 "adrfam": "ipv4", 00:24:26.895 "trsvcid": "$NVMF_PORT", 00:24:26.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.895 "hdgst": ${hdgst:-false}, 00:24:26.895 "ddgst": ${ddgst:-false} 00:24:26.895 }, 00:24:26.895 "method": "bdev_nvme_attach_controller" 00:24:26.895 } 00:24:26.895 EOF 00:24:26.895 )") 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.895 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.895 { 00:24:26.895 "params": { 00:24:26.895 "name": "Nvme$subsystem", 00:24:26.895 "trtype": "$TEST_TRANSPORT", 00:24:26.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.895 "adrfam": "ipv4", 00:24:26.895 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 [2024-06-10 11:31:23.734721] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
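After the loop, the entries collected in config[] are comma-joined and pushed through jq, which is what the IFS=, / printf '%s\n' / jq . lines at nvmf/common.sh@556-@558 in this trace correspond to. A rough sketch of that assembly step, continuing the loop sketch above; only the joined config entries appear verbatim in the log, so the outer "subsystems"/"bdev" framing here is an assumption:

(
IFS=','
# ${config[*]} joins the array elements with the first character of IFS,
# producing the },{ sequence visible in the rendered output above.
jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        ${config[*]}
      ]
    }
  ]
}
EOF
)

bdev_svc and bdevperf then consume the result through process substitution (--json <(gen_nvmf_target_json ...)), which is why the command lines in this log show --json /dev/fd/62 or /dev/fd/63 rather than a file on disk.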
00:24:26.896 [2024-06-10 11:31:23.734772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619850 ] 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:26.896 { 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme$subsystem", 00:24:26.896 "trtype": "$TEST_TRANSPORT", 00:24:26.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "$NVMF_PORT", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.896 "hdgst": ${hdgst:-false}, 
00:24:26.896 "ddgst": ${ddgst:-false} 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 } 00:24:26.896 EOF 00:24:26.896 )") 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:26.896 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:26.896 11:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme1", 00:24:26.896 "trtype": "tcp", 00:24:26.896 "traddr": "10.0.0.2", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "4420", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.896 "hdgst": false, 00:24:26.896 "ddgst": false 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 },{ 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme2", 00:24:26.896 "trtype": "tcp", 00:24:26.896 "traddr": "10.0.0.2", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "4420", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:26.896 "hdgst": false, 00:24:26.896 "ddgst": false 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 },{ 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme3", 00:24:26.896 "trtype": "tcp", 00:24:26.896 "traddr": "10.0.0.2", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "4420", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:26.896 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:26.896 "hdgst": false, 00:24:26.896 "ddgst": false 00:24:26.896 }, 00:24:26.896 "method": "bdev_nvme_attach_controller" 00:24:26.896 },{ 00:24:26.896 "params": { 00:24:26.896 "name": "Nvme4", 00:24:26.896 "trtype": "tcp", 00:24:26.896 "traddr": "10.0.0.2", 00:24:26.896 "adrfam": "ipv4", 00:24:26.896 "trsvcid": "4420", 00:24:26.896 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme5", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme6", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme7", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 
00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme8", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme9", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 },{ 00:24:26.897 "params": { 00:24:26.897 "name": "Nvme10", 00:24:26.897 "trtype": "tcp", 00:24:26.897 "traddr": "10.0.0.2", 00:24:26.897 "adrfam": "ipv4", 00:24:26.897 "trsvcid": "4420", 00:24:26.897 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:26.897 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:26.897 "hdgst": false, 00:24:26.897 "ddgst": false 00:24:26.897 }, 00:24:26.897 "method": "bdev_nvme_attach_controller" 00:24:26.897 }' 00:24:26.897 [2024-06-10 11:31:23.816719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.897 [2024-06-10 11:31:23.877950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.279 Running I/O for 1 seconds... 00:24:29.219 00:24:29.219 Latency(us) 00:24:29.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.219 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme1n1 : 1.01 253.71 15.86 0.00 0.00 249388.11 27625.94 217781.17 00:24:29.219 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme2n1 : 1.17 273.18 17.07 0.00 0.00 227457.65 17341.83 227460.33 00:24:29.219 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme3n1 : 1.17 274.13 17.13 0.00 0.00 224264.66 15022.87 217781.17 00:24:29.219 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme4n1 : 1.12 232.21 14.51 0.00 0.00 244901.83 14014.62 229073.53 00:24:29.219 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme5n1 : 1.18 271.49 16.97 0.00 0.00 219402.40 18753.38 227460.33 00:24:29.219 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme6n1 : 1.14 225.23 14.08 0.00 0.00 259244.11 14619.57 233913.11 00:24:29.219 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme7n1 : 1.13 281.96 17.62 0.00 0.00 203568.36 17845.96 238752.69 00:24:29.219 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme8n1 : 1.18 270.77 16.92 0.00 0.00 209737.73 
18753.38 232299.91 00:24:29.219 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme9n1 : 1.18 326.04 20.38 0.00 0.00 171137.97 15829.46 222620.75 00:24:29.219 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:29.219 Verification LBA range: start 0x0 length 0x400 00:24:29.219 Nvme10n1 : 1.19 268.78 16.80 0.00 0.00 204631.91 10889.06 251658.24 00:24:29.219 =================================================================================================================== 00:24:29.219 Total : 2677.50 167.34 0.00 0.00 218507.08 10889.06 251658.24 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.479 rmmod nvme_tcp 00:24:29.479 rmmod nvme_fabrics 00:24:29.479 rmmod nvme_keyring 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1619160 ']' 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1619160 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1619160 ']' 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1619160 00:24:29.479 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1619160 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 1619160' 00:24:29.480 killing process with pid 1619160 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1619160 00:24:29.480 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1619160 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.740 11:31:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.285 11:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.285 00:24:32.285 real 0m17.534s 00:24:32.285 user 0m34.554s 00:24:32.285 sys 0m7.270s 00:24:32.285 11:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:32.285 11:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:32.285 ************************************ 00:24:32.285 END TEST nvmf_shutdown_tc1 00:24:32.285 ************************************ 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:32.285 ************************************ 00:24:32.285 START TEST nvmf_shutdown_tc2 00:24:32.285 ************************************ 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.285 11:31:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:32.285 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:32.285 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.285 11:31:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.285 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:32.285 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:32.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:24:32.286 00:24:32.286 --- 10.0.0.2 ping statistics --- 00:24:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.286 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:24:32.286 00:24:32.286 --- 10.0.0.1 ping statistics --- 00:24:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.286 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1620876 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # 
waitforlisten 1620876 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1620876 ']' 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:32.286 11:31:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:32.286 [2024-06-10 11:31:29.503649] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:24:32.286 [2024-06-10 11:31:29.503709] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.547 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.547 [2024-06-10 11:31:29.577962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.547 [2024-06-10 11:31:29.652345] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.547 [2024-06-10 11:31:29.652381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.547 [2024-06-10 11:31:29.652388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.547 [2024-06-10 11:31:29.652394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.547 [2024-06-10 11:31:29.652400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
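The nvmftestinit section above (nvmf/common.sh@229-@268) splits the two e810 ports between the default namespace (initiator) and a private one (target) so the TCP transport runs over a real link on this phy rig. The essential commands, collected from the trace; the cvl_0_0/cvl_0_1 interface names are simply what this machine enumerated:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the netns

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check

Because NVMF_APP is prefixed with the namespace command (@270), the nvmf_tgt invocation above runs inside cvl_0_0_ns_spdk and therefore listens on 10.0.0.2:4420 from the target's point of view.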
00:24:32.547 [2024-06-10 11:31:29.652515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.547 [2024-06-10 11:31:29.652663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.547 [2024-06-10 11:31:29.652809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.547 [2024-06-10 11:31:29.652811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 [2024-06-10 11:31:30.398506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.490 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 Malloc1 00:24:33.490 [2024-06-10 11:31:30.494638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.490 Malloc2 00:24:33.490 Malloc3 00:24:33.490 Malloc4 00:24:33.490 Malloc5 00:24:33.490 Malloc6 00:24:33.490 Malloc7 00:24:33.829 Malloc8 00:24:33.829 Malloc9 00:24:33.829 Malloc10 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1621224 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1621224 /var/tmp/bdevperf.sock 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1621224 ']' 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
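The create_subsystems phase above (shutdown.sh@26-@35) writes one batch of RPCs per subsystem into rpcs.txt and replays the file through rpc_cmd in one go, which is where the Malloc1 through Malloc10 bdevs and the single "Listening on 10.0.0.2 port 4420" notice come from. The trace never echoes rpcs.txt itself, so the block below only illustrates the usual shape of such a batch; bdev size, block size, serial number and flags are guesses, and only the NQNs and the 10.0.0.2:4420 listener match values visible in the log:

i=1
cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# ...repeated for i in 1..10, then replayed in a single batch (e.g. rpc_cmd < rpcs.txt)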
00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.829 "ddgst": ${ddgst:-false} 00:24:33.829 }, 00:24:33.829 "method": "bdev_nvme_attach_controller" 00:24:33.829 } 00:24:33.829 EOF 00:24:33.829 )") 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.829 [2024-06-10 11:31:30.936247] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:24:33.829 [2024-06-10 11:31:30.936298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621224 ] 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.829 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.829 { 00:24:33.829 "params": { 00:24:33.829 "name": "Nvme$subsystem", 00:24:33.829 "trtype": "$TEST_TRANSPORT", 00:24:33.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.829 "adrfam": "ipv4", 00:24:33.829 "trsvcid": "$NVMF_PORT", 00:24:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.829 "hdgst": ${hdgst:-false}, 00:24:33.830 "ddgst": ${ddgst:-false} 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 } 00:24:33.830 EOF 00:24:33.830 )") 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.830 { 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme$subsystem", 00:24:33.830 "trtype": "$TEST_TRANSPORT", 00:24:33.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "$NVMF_PORT", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.830 "hdgst": ${hdgst:-false}, 00:24:33.830 "ddgst": ${ddgst:-false} 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 } 00:24:33.830 EOF 00:24:33.830 )") 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.830 { 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme$subsystem", 00:24:33.830 "trtype": "$TEST_TRANSPORT", 00:24:33.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "$NVMF_PORT", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.830 "hdgst": ${hdgst:-false}, 00:24:33.830 "ddgst": ${ddgst:-false} 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 } 00:24:33.830 EOF 00:24:33.830 )") 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.830 { 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme$subsystem", 00:24:33.830 "trtype": "$TEST_TRANSPORT", 00:24:33.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "$NVMF_PORT", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.830 "hdgst": ${hdgst:-false}, 
00:24:33.830 "ddgst": ${ddgst:-false} 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 } 00:24:33.830 EOF 00:24:33.830 )") 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:33.830 11:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme1", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme2", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme3", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme4", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme5", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme6", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme7", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 
00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme8", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme9", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 },{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme10", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 }' 00:24:33.830 [2024-06-10 11:31:31.017691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.104 [2024-06-10 11:31:31.079183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.489 Running I/O for 10 seconds... 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.489 11:31:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:35.489 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.755 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1621224 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1621224 ']' 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1621224 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:36.016 11:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1621224 00:24:36.016 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:36.016 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:36.016 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1621224' 00:24:36.016 killing process with pid 1621224 00:24:36.016 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1621224 00:24:36.016 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1621224 00:24:36.016 Received shutdown signal, test time was about 0.629377 seconds 00:24:36.016 00:24:36.016 Latency(us) 00:24:36.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.016 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 
Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme1n1 : 0.61 312.43 19.53 0.00 0.00 201357.78 16535.24 224233.94 00:24:36.016 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme2n1 : 0.63 306.78 19.17 0.00 0.00 198991.29 23996.26 211328.39 00:24:36.016 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme3n1 : 0.62 309.21 19.33 0.00 0.00 191861.63 16434.41 227460.33 00:24:36.016 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme4n1 : 0.62 307.26 19.20 0.00 0.00 187042.26 18551.73 217781.17 00:24:36.016 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme5n1 : 0.63 305.46 19.09 0.00 0.00 182795.55 13208.02 221007.56 00:24:36.016 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme6n1 : 0.61 211.46 13.22 0.00 0.00 253934.28 18955.03 209715.20 00:24:36.016 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme7n1 : 0.60 214.15 13.38 0.00 0.00 241434.39 22181.42 202455.83 00:24:36.016 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.016 Nvme8n1 : 0.60 213.83 13.36 0.00 0.00 231469.29 20568.22 225847.14 00:24:36.016 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.016 Verification LBA range: start 0x0 length 0x400 00:24:36.017 Nvme9n1 : 0.61 209.98 13.12 0.00 0.00 229993.55 17341.83 225847.14 00:24:36.017 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:36.017 Verification LBA range: start 0x0 length 0x400 00:24:36.017 Nvme10n1 : 0.62 207.56 12.97 0.00 0.00 224835.74 20870.70 258111.02 00:24:36.017 =================================================================================================================== 00:24:36.017 Total : 2598.12 162.38 0.00 0.00 209979.20 13208.02 258111.02 00:24:36.017 11:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1620876 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:37.403 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:37.403 rmmod nvme_tcp 00:24:37.403 rmmod nvme_fabrics 00:24:37.403 rmmod nvme_keyring 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1620876 ']' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1620876 ']' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1620876' 00:24:37.404 killing process with pid 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1620876 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.404 11:31:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.949 00:24:39.949 real 0m7.613s 00:24:39.949 user 0m22.399s 00:24:39.949 sys 0m1.190s 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:39.949 11:31:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:39.949 ************************************ 00:24:39.949 END TEST nvmf_shutdown_tc2 00:24:39.949 ************************************ 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:39.949 ************************************ 00:24:39.949 START TEST nvmf_shutdown_tc3 00:24:39.949 ************************************ 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@296 -- # e810=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.949 
11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.949 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.950 11:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:24:39.950 00:24:39.950 --- 10.0.0.2 ping statistics --- 00:24:39.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.950 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:24:39.950 00:24:39.950 --- 10.0.0.1 ping statistics --- 00:24:39.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.950 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1622268 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1622268 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1622268 ']' 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.950 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:39.950 [2024-06-10 11:31:37.152136] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:24:39.950 [2024-06-10 11:31:37.152182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.210 [2024-06-10 11:31:37.224127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.210 [2024-06-10 11:31:37.286560] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.210 [2024-06-10 11:31:37.286597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.210 [2024-06-10 11:31:37.286604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.210 [2024-06-10 11:31:37.286610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.210 [2024-06-10 11:31:37.286615] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.210 [2024-06-10 11:31:37.286734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.210 [2024-06-10 11:31:37.286886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.210 [2024-06-10 11:31:37.287037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.210 [2024-06-10 11:31:37.287038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:40.781 [2024-06-10 11:31:37.978351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:40.781 11:31:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:40.781 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.041 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.041 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.041 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.041 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.041 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.042 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.042 Malloc1 00:24:41.042 [2024-06-10 11:31:38.078849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.042 Malloc2 00:24:41.042 Malloc3 00:24:41.042 Malloc4 00:24:41.042 Malloc5 00:24:41.042 Malloc6 00:24:41.303 Malloc7 00:24:41.303 Malloc8 00:24:41.303 Malloc9 00:24:41.303 Malloc10 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1622621 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1622621 /var/tmp/bdevperf.sock 00:24:41.303 
11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1622621 ']' 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": "Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": "Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": 
"Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": "Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": "Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.303 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.303 { 00:24:41.303 "params": { 00:24:41.303 "name": "Nvme$subsystem", 00:24:41.303 "trtype": "$TEST_TRANSPORT", 00:24:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.303 "adrfam": "ipv4", 00:24:41.303 "trsvcid": "$NVMF_PORT", 00:24:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.303 "hdgst": ${hdgst:-false}, 00:24:41.303 "ddgst": ${ddgst:-false} 00:24:41.303 }, 00:24:41.303 "method": "bdev_nvme_attach_controller" 00:24:41.303 } 00:24:41.303 EOF 00:24:41.303 )") 00:24:41.304 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.304 [2024-06-10 11:31:38.520199] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:24:41.304 [2024-06-10 11:31:38.520252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622621 ] 00:24:41.304 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.304 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.304 { 00:24:41.304 "params": { 00:24:41.304 "name": "Nvme$subsystem", 00:24:41.304 "trtype": "$TEST_TRANSPORT", 00:24:41.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.304 "adrfam": "ipv4", 00:24:41.304 "trsvcid": "$NVMF_PORT", 00:24:41.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.304 "hdgst": ${hdgst:-false}, 00:24:41.304 "ddgst": ${ddgst:-false} 00:24:41.304 }, 00:24:41.304 "method": "bdev_nvme_attach_controller" 00:24:41.304 } 00:24:41.304 EOF 00:24:41.304 )") 00:24:41.304 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.564 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.564 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.564 { 00:24:41.564 "params": { 00:24:41.564 "name": "Nvme$subsystem", 00:24:41.564 "trtype": "$TEST_TRANSPORT", 00:24:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.564 "adrfam": "ipv4", 00:24:41.564 "trsvcid": "$NVMF_PORT", 00:24:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.564 "hdgst": ${hdgst:-false}, 00:24:41.564 "ddgst": ${ddgst:-false} 00:24:41.564 }, 00:24:41.564 "method": "bdev_nvme_attach_controller" 00:24:41.564 } 00:24:41.564 EOF 00:24:41.564 )") 00:24:41.564 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.564 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.564 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.564 { 00:24:41.564 "params": { 00:24:41.564 "name": "Nvme$subsystem", 00:24:41.564 "trtype": "$TEST_TRANSPORT", 00:24:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.564 "adrfam": "ipv4", 00:24:41.564 "trsvcid": "$NVMF_PORT", 00:24:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.564 "hdgst": ${hdgst:-false}, 00:24:41.564 "ddgst": ${ddgst:-false} 00:24:41.564 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 } 00:24:41.565 EOF 00:24:41.565 )") 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.565 { 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme$subsystem", 00:24:41.565 "trtype": "$TEST_TRANSPORT", 00:24:41.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "$NVMF_PORT", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.565 "hdgst": ${hdgst:-false}, 
00:24:41.565 "ddgst": ${ddgst:-false} 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 } 00:24:41.565 EOF 00:24:41.565 )") 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:41.565 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:41.565 11:31:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme1", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme2", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme3", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme4", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme5", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme6", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme7", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 
00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme8", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme9", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 },{ 00:24:41.565 "params": { 00:24:41.565 "name": "Nvme10", 00:24:41.565 "trtype": "tcp", 00:24:41.565 "traddr": "10.0.0.2", 00:24:41.565 "adrfam": "ipv4", 00:24:41.565 "trsvcid": "4420", 00:24:41.565 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:41.565 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:41.565 "hdgst": false, 00:24:41.565 "ddgst": false 00:24:41.565 }, 00:24:41.565 "method": "bdev_nvme_attach_controller" 00:24:41.565 }' 00:24:41.565 [2024-06-10 11:31:38.601533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.565 [2024-06-10 11:31:38.664226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.949 Running I/O for 10 seconds... 00:24:42.949 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:42.949 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:24:42.949 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:42.949 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.949 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:43.209 11:31:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:43.209 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:43.468 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.728 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:44.001 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1622268 00:24:44.002 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1622268 ']' 00:24:44.002 11:31:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1622268 00:24:44.002 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:24:44.002 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:44.002 11:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1622268 00:24:44.002 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:44.002 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:44.002 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1622268' 00:24:44.002 killing process with pid 1622268 00:24:44.002 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1622268 00:24:44.002 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1622268 00:24:44.002 [2024-06-10 11:31:41.018459] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018537] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018544] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018550] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018588] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018612] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018618] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is 
same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018624] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018637] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018643] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018649] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018655] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018661] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018668] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018674] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018691] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018697] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018710] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018716] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018749] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018761] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018767] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018774] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018781] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018787] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018799] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018805] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018827] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018833] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.002 [2024-06-10 11:31:41.018878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018884] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018891] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the 
state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018928] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.018940] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1990 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020051] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020080] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020086] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020098] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020105] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020129] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020141] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020162] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020173] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020192] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020199] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020205] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020212] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020219] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020225] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020231] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020237] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020243] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020256] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020268] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020274] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020286] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020298] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 
11:31:41.020305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020311] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020317] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020322] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020349] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020355] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020363] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020373] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020380] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020386] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020392] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020399] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020404] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020411] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020417] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020425] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020434] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020446] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same 
with the state(5) to be set 00:24:44.003 [2024-06-10 11:31:41.020452] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.020458] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.020464] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfd90 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.021172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2425e60 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.021301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.004 [2024-06-10 11:31:41.021353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.004 [2024-06-10 11:31:41.021359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d2bb0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022384] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022390] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022403] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022409] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022415] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022421] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022428] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022434] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022446] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022452] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022458] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022489] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022500] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 
11:31:41.022512] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022518] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022524] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022531] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022537] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022543] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022549] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022555] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022561] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022568] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.022579] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06f0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.023060] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0bb0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.023085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0bb0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.023680] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560830 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.023697] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560830 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.024793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.004 [2024-06-10 11:31:41.024819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024838] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024844] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024861] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same 
with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024868] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024874] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024880] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024898] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024947] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024959] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024965] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024978] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024991] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.024997] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025003] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025009] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025015] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025028] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025035] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025041] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025053] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025059] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025066] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the 
state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025133] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025145] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025157] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025163] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025169] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025175] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025181] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.025214] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560cd0 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026042] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026054] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.005 [2024-06-10 11:31:41.026079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026127] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026133] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026145] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026150] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026162] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026177] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026184] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026190] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 11:31:41.026196] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set 00:24:44.006 [2024-06-10 
11:31:41.026202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1050 is same with the state(5) to be set
[... the same recv-state error for tqpair=0x13d1050 repeats through 2024-06-10 11:31:41.026407 ...]
00:24:44.006 [2024-06-10 11:31:41.026965] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d14f0 is same with the state(5) to be set
[... the same recv-state error for tqpair=0x13d14f0 repeats through 2024-06-10 11:31:41.027282 ...]
00:24:44.008 [2024-06-10 11:31:41.040692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.008 [2024-06-10 11:31:41.040727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE commands (sqid:1 cid:42-63, lba:29952-32640, len:128) and READ commands (sqid:1 cid:0-40, lba:24576-29696, len:128) are each reported ABORTED - SQ DELETION (00/08) ...]
00:24:44.009 [2024-06-10 11:31:41.041766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.009 [2024-06-10 11:31:41.041811] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26830c0 was disconnected and freed. reset controller.
00:24:44.009 [2024-06-10 11:31:41.042193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.009 [2024-06-10 11:31:41.042214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE commands (sqid:1 cid:6-63, lba:25344-32640, len:128) and READ commands (sqid:1 cid:0-4, lba:24576-25088, len:128) are each reported ABORTED - SQ DELETION (00/08) ...]
00:24:44.011 [2024-06-10 11:31:41.043238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-06-10 11:31:41.043280] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26845e0 was disconnected and freed. reset controller.
00:24:44.011 [2024-06-10 11:31:41.043510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:44.011 [2024-06-10 11:31:41.043526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1-3 ...]
[2024-06-10 11:31:41.043579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb940 is same with the state(5) to be set
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f29610 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.043689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f53c0 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.043772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043802] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454d50 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.043859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ed120 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.043935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.043990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.043996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ea4c0 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.044015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f0640 is same with the state(5) to be set 00:24:44.012 [2024-06-10 11:31:41.044088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2425e60 (9): Bad file descriptor 00:24:44.012 [2024-06-10 11:31:41.044110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.012 [2024-06-10 11:31:41.044157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.012 [2024-06-10 11:31:41.044164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044170] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24510c0 is same with the state(5) to be set 00:24:44.013 [2024-06-10 11:31:41.044187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2bb0 (9): Bad file descriptor 00:24:44.013 [2024-06-10 11:31:41.044251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.044586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.044595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.013 [2024-06-10 11:31:41.051410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.013 [2024-06-10 11:31:41.051418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.051932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.014 [2024-06-10 11:31:41.051939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.014 [2024-06-10 11:31:41.052010] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26534d0 was disconnected and freed. reset controller. 00:24:44.014 [2024-06-10 11:31:41.052126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 
[2024-06-10 11:31:41.052219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 
11:31:41.052384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 
11:31:41.052544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.015 [2024-06-10 11:31:41.052679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.015 [2024-06-10 11:31:41.052689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 
11:31:41.052706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 
11:31:41.052876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.052988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.052994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 
11:31:41.053034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.016 [2024-06-10 11:31:41.053172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.016 [2024-06-10 11:31:41.053180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680690 is same with the state(5) to be set 00:24:44.016 [2024-06-10 11:31:41.053221] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2680690 was disconnected and freed. reset controller. 
00:24:44.016 [2024-06-10 11:31:41.055636] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:44.016 [2024-06-10 11:31:41.055667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ed120 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24eb940 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29610 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f53c0 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454d50 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea4c0 (9): Bad file descriptor 00:24:44.016 [2024-06-10 11:31:41.055770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f0640 (9): Bad file descriptor 00:24:44.017 [2024-06-10 11:31:41.055784] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:44.017 [2024-06-10 11:31:41.055795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24510c0 (9): Bad file descriptor 00:24:44.017 [2024-06-10 11:31:41.058220] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:44.017 [2024-06-10 11:31:41.058252] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.017 [2024-06-10 11:31:41.058999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.017 [2024-06-10 11:31:41.059535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.017 [2024-06-10 11:31:41.059543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:44.018 [2024-06-10 11:31:41.059600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 
11:31:41.059761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.059986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.059994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.060002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.060010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.018 [2024-06-10 11:31:41.060017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.018 [2024-06-10 11:31:41.060026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.060033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.060042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.060059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.060066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.062513] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:44.019 [2024-06-10 11:31:41.062547] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:44.019 [2024-06-10 11:31:41.062922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.019 
[2024-06-10 11:31:41.062939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ed120 with addr=10.0.0.2, port=4420 00:24:44.019 [2024-06-10 11:31:41.062948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ed120 is same with the state(5) to be set 00:24:44.019 [2024-06-10 11:31:41.063134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.019 [2024-06-10 11:31:41.063144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ea4c0 with addr=10.0.0.2, port=4420 00:24:44.019 [2024-06-10 11:31:41.063150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ea4c0 is same with the state(5) to be set 00:24:44.019 [2024-06-10 11:31:41.063477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.019 [2024-06-10 11:31:41.063488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2425e60 with addr=10.0.0.2, port=4420 00:24:44.019 [2024-06-10 11:31:41.063494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2425e60 is same with the state(5) to be set 00:24:44.019 [2024-06-10 11:31:41.063804] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:44.019 [2024-06-10 11:31:41.064091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.019 [2024-06-10 11:31:41.064517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.019 [2024-06-10 11:31:41.064526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.064984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.064991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.065007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.065023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.065040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.020 [2024-06-10 11:31:41.065072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.020 [2024-06-10 11:31:41.065081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.065088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.065097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.065105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.065114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.065120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.065130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.065137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.065145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2681bf0 is same with the state(5) to be set 00:24:44.021 [2024-06-10 11:31:41.065186] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2681bf0 was disconnected and freed. reset controller. 
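The long runs of READ/WRITE completions above, all reported as ABORTED - SQ DELETION (00/08), are the I/O still outstanding on the qpair being torn down for the controller reset (here tqpair 0x2681bf0, disconnected and freed just above). spdk_nvme_print_completion shows the NVMe status as an (SCT/SC) hex pair: (00/08) is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion. A minimal, standalone decoder for the pairs that appear in this log follows; it is an illustrative aside, not part of the test output, and the helper name is made up for the example.

    /* Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion for
     * the generic (SCT 0x0) status codes seen in this log. */
    #include <stdio.h>

    static const char *nvme_generic_sc_str(unsigned sc)
    {
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x07: return "ABORTED - BY REQUEST";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "OTHER GENERIC STATUS";
            }
    }

    int main(void)
    {
            unsigned sct = 0x00, sc = 0x08;   /* the "(00/08)" pair from the log */

            if (sct == 0x00) {                /* SCT 0x0 = Generic Command Status */
                    printf("(%02x/%02x) = %s\n", sct, sc, nvme_generic_sc_str(sc));
            }
            return 0;
    }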
00:24:44.021 [2024-06-10 11:31:41.065245] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:44.021 [2024-06-10 11:31:41.065283] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:44.021 [2024-06-10 11:31:41.065319] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:44.021 [2024-06-10 11:31:41.065651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.021 [2024-06-10 11:31:41.065665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24510c0 with addr=10.0.0.2, port=4420 00:24:44.021 [2024-06-10 11:31:41.065673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24510c0 is same with the state(5) to be set 00:24:44.021 [2024-06-10 11:31:41.065872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.021 [2024-06-10 11:31:41.065883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d2bb0 with addr=10.0.0.2, port=4420 00:24:44.021 [2024-06-10 11:31:41.065890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2bb0 is same with the state(5) to be set 00:24:44.021 [2024-06-10 11:31:41.065902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ed120 (9): Bad file descriptor 00:24:44.021 [2024-06-10 11:31:41.065913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea4c0 (9): Bad file descriptor 00:24:44.021 [2024-06-10 11:31:41.065922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2425e60 (9): Bad file descriptor 00:24:44.021 [2024-06-10 11:31:41.067311] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:44.021 [2024-06-10 11:31:41.067340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24510c0 (9): Bad file descriptor 00:24:44.021 [2024-06-10 11:31:41.067350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2bb0 (9): Bad file descriptor 00:24:44.021 [2024-06-10 11:31:41.067364] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:44.021 [2024-06-10 11:31:41.067373] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:44.021 [2024-06-10 11:31:41.067382] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:44.021 [2024-06-10 11:31:41.067396] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:44.021 [2024-06-10 11:31:41.067403] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:44.021 [2024-06-10 11:31:41.067410] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:44.021 [2024-06-10 11:31:41.067420] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.021 [2024-06-10 11:31:41.067428] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.021 [2024-06-10 11:31:41.067434] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
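The posix_sock_create errors above fail with errno = 111, which on Linux is ECONNREFUSED: at that moment nothing is accepting connections on 10.0.0.2 port 4420 (the NVMe/TCP well-known port) because the target side is being reset, so each reconnect attempt is refused and the qpair is then flushed with "Bad file descriptor". A minimal, standalone reproduction of the errno value against a closed local port follows; it is an illustrative aside, not part of the test output, and 127.0.0.1 with a hard-coded port is an arbitrary choice for the example.

    /* Connect to a TCP port with no listener and print the resulting errno,
     * mirroring the "connect() failed, errno = 111" messages above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(4420),  /* NVMe/TCP well-known port */
            };
            inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                    return 1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                    /* With no listener on the port this prints:
                     * connect() failed, errno = 111 (Connection refused) */
                    printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            }
            close(fd);
            return 0;
    }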
00:24:44.021 [2024-06-10 11:31:41.067532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.021 [2024-06-10 11:31:41.067544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.021 [2024-06-10 11:31:41.067550] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.021 [2024-06-10 11:31:41.067762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.021 [2024-06-10 11:31:41.067776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454d50 with addr=10.0.0.2, port=4420 00:24:44.021 [2024-06-10 11:31:41.067784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454d50 is same with the state(5) to be set 00:24:44.021 [2024-06-10 11:31:41.067791] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:44.021 [2024-06-10 11:31:41.067797] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:44.021 [2024-06-10 11:31:41.067804] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:44.021 [2024-06-10 11:31:41.067815] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:44.021 [2024-06-10 11:31:41.067827] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:44.021 [2024-06-10 11:31:41.067834] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:44.021 [2024-06-10 11:31:41.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.067889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.067907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.067923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.067944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.021 [2024-06-10 11:31:41.067960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.021 [2024-06-10 11:31:41.067968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:6 through cid:63, lba:25344 through lba:32640, len:128 ...]
00:24:44.023 [2024-06-10 11:31:41.068939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267f200 is same with the state(5) to be set
00:24:44.023 [2024-06-10 11:31:41.070348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:63, lba:24576 through lba:32640, len:128 ...]
00:24:44.025 [2024-06-10 11:31:41.071414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2685b00 is same with the state(5) to be set
00:24:44.025 [2024-06-10 11:31:41.072585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:63, lba:24576 through lba:32640, len:128 ...]
00:24:44.027 [2024-06-10 11:31:41.073670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2687020 is same with the state(5) to be set
00:24:44.027 [2024-06-10 11:31:41.074893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:21, lba:16384 through lba:19072, len:128 ...]
00:24:44.028 [2024-06-10 11:31:41.075263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:44.028 [2024-06-10 11:31:41.075598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.028 [2024-06-10 11:31:41.075678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.028 [2024-06-10 11:31:41.075688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 
11:31:41.075762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075926] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.029 [2024-06-10 11:31:41.075942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.029 [2024-06-10 11:31:41.075949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421320 is same with the state(5) to be set 00:24:44.029 [2024-06-10 11:31:41.077861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.029 [2024-06-10 11:31:41.077881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.029 [2024-06-10 11:31:41.077888] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:44.029 [2024-06-10 11:31:41.077898] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:44.029 [2024-06-10 11:31:41.077907] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:44.029 [2024-06-10 11:31:41.077940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454d50 (9): Bad file descriptor 00:24:44.029 [2024-06-10 11:31:41.077991] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:44.029 [2024-06-10 11:31:41.078005] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:44.029 task offset: 29824 on job bdev=Nvme5n1 fails
00:24:44.029
00:24:44.029 Latency(us)
00:24:44.029 Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s    Average       min         max
00:24:44.029 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme1n1 ended in about 0.93 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme1n1            :       0.93  206.75   12.92    68.92   0.00  229721.21  17341.83   227460.33
00:24:44.029 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme2n1 ended in about 0.94 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme2n1            :       0.94  203.86   12.74    67.95   0.00  228595.00  17039.36   233913.11
00:24:44.029 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme3n1 ended in about 0.93 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme3n1            :       0.93  206.50   12.91    68.83   0.00  221201.33  17039.36   235526.30
00:24:44.029 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme4n1 ended in about 0.94 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme4n1            :       0.94  270.56   16.91    68.17   0.00  176319.90  17946.78   206488.81
00:24:44.029 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme5n1 ended in about 0.93 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme5n1            :       0.93  207.30   12.96    69.10   0.00  211613.74  12905.55   227460.33
00:24:44.029 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme6n1 ended in about 0.93 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme6n1            :       0.93  207.07   12.94    69.02   0.00  207484.75  12451.84   229073.53
00:24:44.029 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme7n1 ended in about 0.94 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme7n1            :       0.94  203.33   12.71    67.78   0.00  207386.39  20064.10   214554.78
00:24:44.029 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme8n1 ended in about 0.95 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme8n1            :       0.95  202.85   12.68    67.62   0.00  203555.25  17543.48   229073.53
00:24:44.029 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme9n1 ended in about 0.95 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme9n1            :       0.95  134.91    8.43    67.45   0.00  266386.77  19257.50   243592.27
00:24:44.029 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:44.029 Job: Nvme10n1 ended in about 0.93 seconds with error
00:24:44.029 Verification LBA range: start 0x0 length 0x400
00:24:44.029 Nvme10n1           :       0.93  137.20    8.57    68.60   0.00  255324.69  32667.18   232299.91
00:24:44.029 ===================================================================================================================
00:24:44.029 Total              :             1980.32  123.77   683.44   0.00  217596.27  12451.84   243592.27
00:24:44.029 [2024-06-10 11:31:41.105052] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:44.029 [2024-06-10 11:31:41.105099] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9]
resetting controller 00:24:44.029 [2024-06-10 11:31:41.105565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.029 [2024-06-10 11:31:41.105584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f0640 with addr=10.0.0.2, port=4420 00:24:44.029 [2024-06-10 11:31:41.105593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f0640 is same with the state(5) to be set 00:24:44.029 [2024-06-10 11:31:41.105912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.105923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f29610 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.105930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f29610 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.106264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.106275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24eb940 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.106282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24eb940 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.106290] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.106296] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.106304] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:44.030 [2024-06-10 11:31:41.107324] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.030 [2024-06-10 11:31:41.107339] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:44.030 [2024-06-10 11:31:41.107347] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:44.030 [2024-06-10 11:31:41.107355] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:44.030 [2024-06-10 11:31:41.107364] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:44.030 [2024-06-10 11:31:41.107373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
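The MiB/s column in the Latency(us) table above follows directly from the IOPS column and the 65536-byte IO size given in each job header. As an illustrative check (not part of the captured output), the Nvme1n1 row works out as:

  awk 'BEGIN { printf "%.2f MiB/s\n", 206.75 * 65536 / 1048576 }'   # IOPS x IO size / 2^20
  12.92 MiB/s

The same relation holds for the Total row: 1980.32 IOPS over 64 KiB IOs gives the reported 123.77 MiB/s.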
00:24:44.030 [2024-06-10 11:31:41.107783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.107796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f53c0 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.107803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f53c0 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.107816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f0640 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.107829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29610 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.107838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24eb940 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.107875] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:44.030 [2024-06-10 11:31:41.107887] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:44.030 [2024-06-10 11:31:41.107896] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:44.030 [2024-06-10 11:31:41.108503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.108519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2425e60 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.108526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2425e60 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.108849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.108860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ea4c0 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.108867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ea4c0 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.109207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.109217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ed120 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.109224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ed120 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.109410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.109423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d2bb0 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.109430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2bb0 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.109746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.109756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24510c0 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.109762] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24510c0 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.109772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f53c0 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109782] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.109788] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.109795] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:44.030 [2024-06-10 11:31:41.109807] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.109816] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.109826] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:44.030 [2024-06-10 11:31:41.109836] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.109842] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.109849] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:44.030 [2024-06-10 11:31:41.109906] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:44.030 [2024-06-10 11:31:41.109917] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.030 [2024-06-10 11:31:41.109923] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.030 [2024-06-10 11:31:41.109929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.030 [2024-06-10 11:31:41.109943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2425e60 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea4c0 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ed120 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2bb0 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24510c0 (9): Bad file descriptor 00:24:44.030 [2024-06-10 11:31:41.109984] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.109990] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.109997] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:44.030 [2024-06-10 11:31:41.110023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.030 [2024-06-10 11:31:41.110337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.030 [2024-06-10 11:31:41.110348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454d50 with addr=10.0.0.2, port=4420 00:24:44.030 [2024-06-10 11:31:41.110355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454d50 is same with the state(5) to be set 00:24:44.030 [2024-06-10 11:31:41.110363] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.110369] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.110375] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.030 [2024-06-10 11:31:41.110385] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.110391] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.110397] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:44.030 [2024-06-10 11:31:41.110406] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.110413] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.110419] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:44.030 [2024-06-10 11:31:41.110431] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.110438] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:44.030 [2024-06-10 11:31:41.110445] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:44.030 [2024-06-10 11:31:41.110454] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:44.030 [2024-06-10 11:31:41.110459] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:44.031 [2024-06-10 11:31:41.110466] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:44.031 [2024-06-10 11:31:41.110494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.031 [2024-06-10 11:31:41.110501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.031 [2024-06-10 11:31:41.110507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.031 [2024-06-10 11:31:41.110513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.031 [2024-06-10 11:31:41.110518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
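errno = 111 in the connect() failures above is ECONNREFUSED: at this point in the shutdown test nothing is accepting connections on 10.0.0.2:4420 any more, so every reconnect attempt from the failed-over controllers is refused and the resets keep failing. The mapping can be confirmed on the test host with an illustrative command (not part of the captured run):

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h   # -> #define ECONNREFUSED 111 /* Connection refused */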
00:24:44.031 [2024-06-10 11:31:41.110525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454d50 (9): Bad file descriptor 00:24:44.031 [2024-06-10 11:31:41.110549] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:44.031 [2024-06-10 11:31:41.110556] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:44.031 [2024-06-10 11:31:41.110563] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:44.031 [2024-06-10 11:31:41.110589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.290 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:44.290 11:31:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:45.229 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1622621 00:24:45.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1622621) - No such process 00:24:45.229 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:45.229 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:45.229 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:45.229 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.230 rmmod nvme_tcp 00:24:45.230 rmmod nvme_fabrics 00:24:45.230 rmmod nvme_keyring 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.230 11:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.771 00:24:47.771 real 0m7.705s 00:24:47.771 user 0m18.835s 00:24:47.771 sys 0m1.162s 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 ************************************ 00:24:47.771 END TEST nvmf_shutdown_tc3 00:24:47.771 ************************************ 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:47.771 00:24:47.771 real 0m33.232s 00:24:47.771 user 1m15.939s 00:24:47.771 sys 0m9.870s 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:47.771 11:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 ************************************ 00:24:47.771 END TEST nvmf_shutdown 00:24:47.771 ************************************ 00:24:47.771 11:31:44 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 11:31:44 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 11:31:44 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:24:47.771 11:31:44 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:47.771 11:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 ************************************ 00:24:47.771 START TEST nvmf_multicontroller 00:24:47.771 ************************************ 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:47.771 * Looking for test storage... 
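The nvmftestinit sequence traced below carves the two ice ports into a loopback NVMe/TCP topology: cvl_0_0 is moved into a private network namespace as the target-side interface, while cvl_0_1 stays in the root namespace as the initiator side. A condensed sketch of those steps, with interface names, addresses, and port taken from this log rather than quoted verbatim:

  ip netns add cvl_0_0_ns_spdk                                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                                    # verify target reachability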
00:24:47.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.771 11:31:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:47.772 11:31:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.772 11:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.904 11:31:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.904 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.904 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:55.904 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:55.904 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.904 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.905 11:31:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:24:55.905 00:24:55.905 --- 10.0.0.2 ping statistics --- 00:24:55.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.905 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:24:55.905 00:24:55.905 --- 10.0.0.1 ping statistics --- 00:24:55.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.905 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1627680 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1627680 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1627680 ']' 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:55.905 11:31:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.905 [2024-06-10 11:31:52.996196] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:24:55.905 [2024-06-10 11:31:52.996258] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.905 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.905 [2024-06-10 11:31:53.070161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:56.165 [2024-06-10 11:31:53.142997] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.165 [2024-06-10 11:31:53.143035] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.165 [2024-06-10 11:31:53.143042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.165 [2024-06-10 11:31:53.143048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.165 [2024-06-10 11:31:53.143053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.165 [2024-06-10 11:31:53.143194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.165 [2024-06-10 11:31:53.143344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.165 [2024-06-10 11:31:53.143345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.736 [2024-06-10 11:31:53.898280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.736 11:31:53 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.736 Malloc0 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.736 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.997 [2024-06-10 11:31:53.963769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.997 [2024-06-10 11:31:53.975718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.997 Malloc1 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.997 11:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:56.997 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1627820 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1627820 /var/tmp/bdevperf.sock 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1627820 ']' 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:56.998 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.941 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:57.941 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:24:57.941 11:31:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:57.941 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.941 11:31:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.941 NVMe0n1 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.941 1 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.941 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.941 request: 00:24:57.941 { 00:24:57.941 "name": "NVMe0", 00:24:57.941 "trtype": "tcp", 00:24:57.941 "traddr": "10.0.0.2", 00:24:57.941 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:57.941 "hostaddr": "10.0.0.2", 00:24:57.941 "hostsvcid": "60000", 00:24:57.941 "adrfam": "ipv4", 00:24:57.941 "trsvcid": "4420", 00:24:57.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.941 "method": 
"bdev_nvme_attach_controller", 00:24:57.941 "req_id": 1 00:24:57.941 } 00:24:57.941 Got JSON-RPC error response 00:24:57.942 response: 00:24:57.942 { 00:24:57.942 "code": -114, 00:24:57.942 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:57.942 } 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.942 request: 00:24:57.942 { 00:24:57.942 "name": "NVMe0", 00:24:57.942 "trtype": "tcp", 00:24:57.942 "traddr": "10.0.0.2", 00:24:57.942 "hostaddr": "10.0.0.2", 00:24:57.942 "hostsvcid": "60000", 00:24:57.942 "adrfam": "ipv4", 00:24:57.942 "trsvcid": "4420", 00:24:57.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:57.942 "method": "bdev_nvme_attach_controller", 00:24:57.942 "req_id": 1 00:24:57.942 } 00:24:57.942 Got JSON-RPC error response 00:24:57.942 response: 00:24:57.942 { 00:24:57.942 "code": -114, 00:24:57.942 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:57.942 } 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.942 request: 00:24:57.942 { 00:24:57.942 "name": "NVMe0", 00:24:57.942 "trtype": "tcp", 00:24:57.942 "traddr": "10.0.0.2", 00:24:57.942 "hostaddr": "10.0.0.2", 00:24:57.942 "hostsvcid": "60000", 00:24:57.942 "adrfam": "ipv4", 00:24:57.942 "trsvcid": "4420", 00:24:57.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.942 "multipath": "disable", 00:24:57.942 "method": "bdev_nvme_attach_controller", 00:24:57.942 "req_id": 1 00:24:57.942 } 00:24:57.942 Got JSON-RPC error response 00:24:57.942 response: 00:24:57.942 { 00:24:57.942 "code": -114, 00:24:57.942 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:57.942 } 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.942 request: 00:24:57.942 { 00:24:57.942 "name": "NVMe0", 00:24:57.942 "trtype": "tcp", 00:24:57.942 "traddr": "10.0.0.2", 00:24:57.942 "hostaddr": "10.0.0.2", 00:24:57.942 "hostsvcid": "60000", 00:24:57.942 "adrfam": "ipv4", 00:24:57.942 "trsvcid": "4420", 00:24:57.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.942 "multipath": "failover", 00:24:57.942 "method": "bdev_nvme_attach_controller", 00:24:57.942 "req_id": 1 00:24:57.942 } 00:24:57.942 Got JSON-RPC error response 00:24:57.942 response: 00:24:57.942 { 00:24:57.942 "code": -114, 00:24:57.942 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:57.942 } 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.942 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.203 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.203 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:58.203 11:31:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.589 0 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1627820 ']' 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1627820' 00:24:59.589 killing process with pid 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1627820 00:24:59.589 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:59.590 11:31:56 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:24:59.590 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:59.590 [2024-06-10 11:31:54.095178] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:24:59.590 [2024-06-10 11:31:54.095233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627820 ] 00:24:59.590 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.590 [2024-06-10 11:31:54.176487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.590 [2024-06-10 11:31:54.238300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.590 [2024-06-10 11:31:55.343659] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name eb4f1273-37c3-498b-8b7a-7d2d95a04418 already exists 00:24:59.590 [2024-06-10 11:31:55.343689] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:eb4f1273-37c3-498b-8b7a-7d2d95a04418 alias for bdev NVMe1n1 00:24:59.590 [2024-06-10 11:31:55.343699] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:59.590 Running I/O for 1 seconds... 
00:24:59.590 00:24:59.590 Latency(us) 00:24:59.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.590 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:59.590 NVMe0n1 : 1.00 21927.33 85.65 0.00 0.00 5822.59 3680.10 13208.02 00:24:59.590 =================================================================================================================== 00:24:59.590 Total : 21927.33 85.65 0.00 0.00 5822.59 3680.10 13208.02 00:24:59.590 Received shutdown signal, test time was about 1.000000 seconds 00:24:59.590 00:24:59.590 Latency(us) 00:24:59.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.590 =================================================================================================================== 00:24:59.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.590 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.590 rmmod nvme_tcp 00:24:59.590 rmmod nvme_fabrics 00:24:59.590 rmmod nvme_keyring 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1627680 ']' 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1627680 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1627680 ']' 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1627680 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:59.590 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1627680 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1627680' 00:24:59.851 killing process with pid 1627680 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1627680 00:24:59.851 11:31:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1627680 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.851 11:31:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.851 11:31:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.851 11:31:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.399 11:31:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.399 00:25:02.399 real 0m14.433s 00:25:02.399 user 0m16.910s 00:25:02.399 sys 0m6.789s 00:25:02.399 11:31:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:02.399 11:31:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:02.399 ************************************ 00:25:02.399 END TEST nvmf_multicontroller 00:25:02.399 ************************************ 00:25:02.399 11:31:59 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:02.399 11:31:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:02.399 11:31:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:02.399 11:31:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.399 ************************************ 00:25:02.399 START TEST nvmf_aer 00:25:02.399 ************************************ 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:02.399 * Looking for test storage... 
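The aer test starts by sourcing nvmf/common.sh, which repeats the physical-NIC bring-up already seen during the multicontroller run (and traced again below at 11:32:07): one port of the e810 pair, cvl_0_0, is moved into a private network namespace for the target, while cvl_0_1 stays in the root namespace as the initiator. Consolidated from the ip/iptables commands in the trace, with interface names and addresses exactly as the log reports them, the setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port in the host firewall
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The nvmf_tgt process itself is then launched with ip netns exec cvl_0_0_ns_spdk, so every listener it opens on 10.0.0.2 is reachable from the initiator over the real link.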
00:25:02.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.399 11:31:59 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.400 11:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:10.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:25:10.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:10.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.617 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:10.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.618 
11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:25:10.618 00:25:10.618 --- 10.0.0.2 ping statistics --- 00:25:10.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.618 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:25:10.618 00:25:10.618 --- 10.0.0.1 ping statistics --- 00:25:10.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.618 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1632666 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1632666 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1632666 ']' 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:10.618 11:32:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:10.618 [2024-06-10 11:32:07.587952] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:25:10.618 [2024-06-10 11:32:07.588015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.618 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.618 [2024-06-10 11:32:07.678570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.618 [2024-06-10 11:32:07.773316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.618 [2024-06-10 11:32:07.773374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
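The commands traced from `ip netns add` through the two pings build the point-to-point topology every nvmf TCP host test in this run relies on: port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk and addressed 10.0.0.2 (target side), its peer cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), the NVMe/TCP port is opened in iptables, and reachability is checked in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup, using the interface names and addresses from this run (swap them for other NICs):

# Target interface lives in its own namespace; initiator interface stays in the root namespace.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> root ns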
00:25:10.618 [2024-06-10 11:32:07.773382] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.618 [2024-06-10 11:32:07.773389] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.618 [2024-06-10 11:32:07.773395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.618 [2024-06-10 11:32:07.773536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.618 [2024-06-10 11:32:07.773696] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.618 [2024-06-10 11:32:07.774124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.618 [2024-06-10 11:32:07.774128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 [2024-06-10 11:32:08.493417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 Malloc0 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 [2024-06-10 11:32:08.549634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.559 [ 00:25:11.559 { 00:25:11.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:11.559 "subtype": "Discovery", 00:25:11.559 "listen_addresses": [], 00:25:11.559 "allow_any_host": true, 00:25:11.559 "hosts": [] 00:25:11.559 }, 00:25:11.559 { 00:25:11.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.559 "subtype": "NVMe", 00:25:11.559 "listen_addresses": [ 00:25:11.559 { 00:25:11.559 "trtype": "TCP", 00:25:11.559 "adrfam": "IPv4", 00:25:11.559 "traddr": "10.0.0.2", 00:25:11.559 "trsvcid": "4420" 00:25:11.559 } 00:25:11.559 ], 00:25:11.559 "allow_any_host": true, 00:25:11.559 "hosts": [], 00:25:11.559 "serial_number": "SPDK00000000000001", 00:25:11.559 "model_number": "SPDK bdev Controller", 00:25:11.559 "max_namespaces": 2, 00:25:11.559 "min_cntlid": 1, 00:25:11.559 "max_cntlid": 65519, 00:25:11.559 "namespaces": [ 00:25:11.559 { 00:25:11.559 "nsid": 1, 00:25:11.559 "bdev_name": "Malloc0", 00:25:11.559 "name": "Malloc0", 00:25:11.559 "nguid": "C3DD3AED9C7C41F58F409EAB29F68787", 00:25:11.559 "uuid": "c3dd3aed-9c7c-41f5-8f40-9eab29f68787" 00:25:11.559 } 00:25:11.559 ] 00:25:11.559 } 00:25:11.559 ] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:11.559 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1632956 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:11.560 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:25:11.560 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 Malloc1 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 Asynchronous Event Request test 00:25:11.829 Attaching to 10.0.0.2 00:25:11.829 Attached to 10.0.0.2 00:25:11.829 Registering asynchronous event callbacks... 00:25:11.829 Starting namespace attribute notice tests for all controllers... 00:25:11.829 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:11.829 aer_cb - Changed Namespace 00:25:11.829 Cleaning up... 00:25:11.829 [ 00:25:11.829 { 00:25:11.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:11.829 "subtype": "Discovery", 00:25:11.829 "listen_addresses": [], 00:25:11.829 "allow_any_host": true, 00:25:11.829 "hosts": [] 00:25:11.829 }, 00:25:11.829 { 00:25:11.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.829 "subtype": "NVMe", 00:25:11.829 "listen_addresses": [ 00:25:11.829 { 00:25:11.829 "trtype": "TCP", 00:25:11.829 "adrfam": "IPv4", 00:25:11.829 "traddr": "10.0.0.2", 00:25:11.829 "trsvcid": "4420" 00:25:11.829 } 00:25:11.829 ], 00:25:11.829 "allow_any_host": true, 00:25:11.829 "hosts": [], 00:25:11.829 "serial_number": "SPDK00000000000001", 00:25:11.829 "model_number": "SPDK bdev Controller", 00:25:11.829 "max_namespaces": 2, 00:25:11.829 "min_cntlid": 1, 00:25:11.829 "max_cntlid": 65519, 00:25:11.829 "namespaces": [ 00:25:11.829 { 00:25:11.829 "nsid": 1, 00:25:11.829 "bdev_name": "Malloc0", 00:25:11.829 "name": "Malloc0", 00:25:11.829 "nguid": "C3DD3AED9C7C41F58F409EAB29F68787", 00:25:11.829 "uuid": "c3dd3aed-9c7c-41f5-8f40-9eab29f68787" 00:25:11.829 }, 00:25:11.829 { 00:25:11.829 "nsid": 2, 00:25:11.829 "bdev_name": "Malloc1", 00:25:11.829 "name": "Malloc1", 00:25:11.829 "nguid": "3C21B8BABFCA470EBF8CB9BBC4EAB75E", 00:25:11.829 "uuid": "3c21b8ba-bfca-470e-bf8c-b9bbc4eab75e" 00:25:11.829 } 00:25:11.829 ] 00:25:11.829 } 00:25:11.829 ] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1632956 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.829 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.830 rmmod nvme_tcp 00:25:11.830 rmmod nvme_fabrics 00:25:11.830 rmmod nvme_keyring 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1632666 ']' 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1632666 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1632666 ']' 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1632666 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:11.830 11:32:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632666 00:25:11.830 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:11.830 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:11.830 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632666' 00:25:11.830 killing process with pid 1632666 00:25:11.830 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1632666 00:25:11.830 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1632666 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:25:12.091 11:32:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.004 11:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.004 00:25:14.004 real 0m12.054s 00:25:14.004 user 0m7.965s 00:25:14.004 sys 0m6.509s 00:25:14.004 11:32:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:14.004 11:32:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:14.004 ************************************ 00:25:14.004 END TEST nvmf_aer 00:25:14.004 ************************************ 00:25:14.264 11:32:11 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:14.264 11:32:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:14.264 11:32:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:14.264 11:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.264 ************************************ 00:25:14.264 START TEST nvmf_async_init 00:25:14.264 ************************************ 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:14.264 * Looking for test storage... 00:25:14.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.264 
11:32:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5c5a1db5ff1747d6b4cb50986cd1e9bb 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.264 11:32:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.420 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.421 11:32:18 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:22.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:22.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:22.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:22.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.421 11:32:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:25:22.421 00:25:22.421 --- 10.0.0.2 ping statistics --- 00:25:22.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.421 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:22.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:22.421 00:25:22.421 --- 10.0.0.1 ping statistics --- 00:25:22.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.421 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.421 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1637270 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1637270 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 1637270 ']' 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:22.422 11:32:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.422 [2024-06-10 11:32:19.183122] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:25:22.422 [2024-06-10 11:32:19.183189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.422 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.422 [2024-06-10 11:32:19.275951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.422 [2024-06-10 11:32:19.367562] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.422 [2024-06-10 11:32:19.367620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.422 [2024-06-10 11:32:19.367628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.422 [2024-06-10 11:32:19.367634] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.422 [2024-06-10 11:32:19.367640] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
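The RPC calls that follow (issued through the harness's rpc_cmd wrapper) provision the target for the async attach, reset and TLS cases. A rough equivalent using scripts/rpc.py directly is sketched below. Assumptions worth flagging: rpc_cmd is taken to forward to rpc.py on the default /var/tmp/spdk.sock socket inside the target namespace, and in this test a single nvmf_tgt process plays both roles, exporting the subsystem and then attaching to it as an NVMe-oF host so the remote namespace appears locally as bdev nvme0n1. The NQN, NGUID and addresses are the ones visible in this run.

# All RPCs go to the one nvmf_tgt started inside the target namespace (assumed socket path).
RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o          # TCP transport; -o passed through exactly as the harness does
$RPC bdev_null_create null0 1024 512          # 1024 MiB null bdev with 512-byte blocks (2097152 blocks)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # allow any host for the first attach
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c5a1db5ff1747d6b4cb50986cd1e9bb
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1                # remote namespace surfaced as bdev nvme0n1
$RPC bdev_nvme_reset_controller nvme0         # the reset exercised next in the trace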
00:25:22.422 [2024-06-10 11:32:19.367665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.993 [2024-06-10 11:32:20.093905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.993 null0 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.993 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c5a1db5ff1747d6b4cb50986cd1e9bb 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.994 [2024-06-10 11:32:20.154259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.994 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.254 nvme0n1 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.254 [ 00:25:23.254 { 00:25:23.254 "name": "nvme0n1", 00:25:23.254 "aliases": [ 00:25:23.254 "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb" 00:25:23.254 ], 00:25:23.254 "product_name": "NVMe disk", 00:25:23.254 "block_size": 512, 00:25:23.254 "num_blocks": 2097152, 00:25:23.254 "uuid": "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb", 00:25:23.254 "assigned_rate_limits": { 00:25:23.254 "rw_ios_per_sec": 0, 00:25:23.254 "rw_mbytes_per_sec": 0, 00:25:23.254 "r_mbytes_per_sec": 0, 00:25:23.254 "w_mbytes_per_sec": 0 00:25:23.254 }, 00:25:23.254 "claimed": false, 00:25:23.254 "zoned": false, 00:25:23.254 "supported_io_types": { 00:25:23.254 "read": true, 00:25:23.254 "write": true, 00:25:23.254 "unmap": false, 00:25:23.254 "write_zeroes": true, 00:25:23.254 "flush": true, 00:25:23.254 "reset": true, 00:25:23.254 "compare": true, 00:25:23.254 "compare_and_write": true, 00:25:23.254 "abort": true, 00:25:23.254 "nvme_admin": true, 00:25:23.254 "nvme_io": true 00:25:23.254 }, 00:25:23.254 "memory_domains": [ 00:25:23.254 { 00:25:23.254 "dma_device_id": "system", 00:25:23.254 "dma_device_type": 1 00:25:23.254 } 00:25:23.254 ], 00:25:23.254 "driver_specific": { 00:25:23.254 "nvme": [ 00:25:23.254 { 00:25:23.254 "trid": { 00:25:23.254 "trtype": "TCP", 00:25:23.254 "adrfam": "IPv4", 00:25:23.254 "traddr": "10.0.0.2", 00:25:23.254 "trsvcid": "4420", 00:25:23.254 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.254 }, 00:25:23.254 "ctrlr_data": { 00:25:23.254 "cntlid": 1, 00:25:23.254 "vendor_id": "0x8086", 00:25:23.254 "model_number": "SPDK bdev Controller", 00:25:23.254 "serial_number": "00000000000000000000", 00:25:23.254 "firmware_revision": "24.09", 00:25:23.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.254 "oacs": { 00:25:23.254 "security": 0, 00:25:23.254 "format": 0, 00:25:23.254 "firmware": 0, 00:25:23.254 "ns_manage": 0 00:25:23.254 }, 00:25:23.254 "multi_ctrlr": true, 00:25:23.254 "ana_reporting": false 00:25:23.254 }, 00:25:23.254 "vs": { 00:25:23.254 "nvme_version": "1.3" 00:25:23.254 }, 00:25:23.254 "ns_data": { 00:25:23.254 "id": 1, 00:25:23.254 "can_share": true 00:25:23.254 } 00:25:23.254 } 00:25:23.254 ], 00:25:23.254 "mp_policy": "active_passive" 00:25:23.254 } 00:25:23.254 } 00:25:23.254 ] 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.254 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.254 [2024-06-10 11:32:20.424281] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.255 [2024-06-10 11:32:20.424356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239b560 (9): Bad file descriptor 00:25:23.516 [2024-06-10 11:32:20.555913] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.516 [ 00:25:23.516 { 00:25:23.516 "name": "nvme0n1", 00:25:23.516 "aliases": [ 00:25:23.516 "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb" 00:25:23.516 ], 00:25:23.516 "product_name": "NVMe disk", 00:25:23.516 "block_size": 512, 00:25:23.516 "num_blocks": 2097152, 00:25:23.516 "uuid": "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb", 00:25:23.516 "assigned_rate_limits": { 00:25:23.516 "rw_ios_per_sec": 0, 00:25:23.516 "rw_mbytes_per_sec": 0, 00:25:23.516 "r_mbytes_per_sec": 0, 00:25:23.516 "w_mbytes_per_sec": 0 00:25:23.516 }, 00:25:23.516 "claimed": false, 00:25:23.516 "zoned": false, 00:25:23.516 "supported_io_types": { 00:25:23.516 "read": true, 00:25:23.516 "write": true, 00:25:23.516 "unmap": false, 00:25:23.516 "write_zeroes": true, 00:25:23.516 "flush": true, 00:25:23.516 "reset": true, 00:25:23.516 "compare": true, 00:25:23.516 "compare_and_write": true, 00:25:23.516 "abort": true, 00:25:23.516 "nvme_admin": true, 00:25:23.516 "nvme_io": true 00:25:23.516 }, 00:25:23.516 "memory_domains": [ 00:25:23.516 { 00:25:23.516 "dma_device_id": "system", 00:25:23.516 "dma_device_type": 1 00:25:23.516 } 00:25:23.516 ], 00:25:23.516 "driver_specific": { 00:25:23.516 "nvme": [ 00:25:23.516 { 00:25:23.516 "trid": { 00:25:23.516 "trtype": "TCP", 00:25:23.516 "adrfam": "IPv4", 00:25:23.516 "traddr": "10.0.0.2", 00:25:23.516 "trsvcid": "4420", 00:25:23.516 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.516 }, 00:25:23.516 "ctrlr_data": { 00:25:23.516 "cntlid": 2, 00:25:23.516 "vendor_id": "0x8086", 00:25:23.516 "model_number": "SPDK bdev Controller", 00:25:23.516 "serial_number": "00000000000000000000", 00:25:23.516 "firmware_revision": "24.09", 00:25:23.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.516 "oacs": { 00:25:23.516 "security": 0, 00:25:23.516 "format": 0, 00:25:23.516 "firmware": 0, 00:25:23.516 "ns_manage": 0 00:25:23.516 }, 00:25:23.516 "multi_ctrlr": true, 00:25:23.516 "ana_reporting": false 00:25:23.516 }, 00:25:23.516 "vs": { 00:25:23.516 "nvme_version": "1.3" 00:25:23.516 }, 00:25:23.516 "ns_data": { 00:25:23.516 "id": 1, 00:25:23.516 "can_share": true 00:25:23.516 } 00:25:23.516 } 00:25:23.516 ], 00:25:23.516 "mp_policy": "active_passive" 00:25:23.516 } 00:25:23.516 } 00:25:23.516 ] 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.llgnuDj2vJ 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.llgnuDj2vJ 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:23.516 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.517 [2024-06-10 11:32:20.628908] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:23.517 [2024-06-10 11:32:20.629058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.llgnuDj2vJ 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.517 [2024-06-10 11:32:20.640936] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.llgnuDj2vJ 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.517 [2024-06-10 11:32:20.652964] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.517 [2024-06-10 11:32:20.653010] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:23.517 nvme0n1 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.517 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.517 [ 00:25:23.517 { 00:25:23.517 "name": "nvme0n1", 00:25:23.517 "aliases": [ 00:25:23.517 "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb" 00:25:23.517 ], 00:25:23.517 
"product_name": "NVMe disk", 00:25:23.517 "block_size": 512, 00:25:23.517 "num_blocks": 2097152, 00:25:23.517 "uuid": "5c5a1db5-ff17-47d6-b4cb-50986cd1e9bb", 00:25:23.517 "assigned_rate_limits": { 00:25:23.517 "rw_ios_per_sec": 0, 00:25:23.517 "rw_mbytes_per_sec": 0, 00:25:23.517 "r_mbytes_per_sec": 0, 00:25:23.517 "w_mbytes_per_sec": 0 00:25:23.517 }, 00:25:23.517 "claimed": false, 00:25:23.517 "zoned": false, 00:25:23.517 "supported_io_types": { 00:25:23.517 "read": true, 00:25:23.517 "write": true, 00:25:23.517 "unmap": false, 00:25:23.517 "write_zeroes": true, 00:25:23.517 "flush": true, 00:25:23.517 "reset": true, 00:25:23.517 "compare": true, 00:25:23.517 "compare_and_write": true, 00:25:23.517 "abort": true, 00:25:23.517 "nvme_admin": true, 00:25:23.517 "nvme_io": true 00:25:23.517 }, 00:25:23.517 "memory_domains": [ 00:25:23.517 { 00:25:23.517 "dma_device_id": "system", 00:25:23.517 "dma_device_type": 1 00:25:23.517 } 00:25:23.517 ], 00:25:23.517 "driver_specific": { 00:25:23.517 "nvme": [ 00:25:23.517 { 00:25:23.517 "trid": { 00:25:23.517 "trtype": "TCP", 00:25:23.517 "adrfam": "IPv4", 00:25:23.517 "traddr": "10.0.0.2", 00:25:23.517 "trsvcid": "4421", 00:25:23.517 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.517 }, 00:25:23.517 "ctrlr_data": { 00:25:23.517 "cntlid": 3, 00:25:23.517 "vendor_id": "0x8086", 00:25:23.517 "model_number": "SPDK bdev Controller", 00:25:23.517 "serial_number": "00000000000000000000", 00:25:23.517 "firmware_revision": "24.09", 00:25:23.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.517 "oacs": { 00:25:23.517 "security": 0, 00:25:23.778 "format": 0, 00:25:23.778 "firmware": 0, 00:25:23.778 "ns_manage": 0 00:25:23.778 }, 00:25:23.778 "multi_ctrlr": true, 00:25:23.778 "ana_reporting": false 00:25:23.778 }, 00:25:23.778 "vs": { 00:25:23.778 "nvme_version": "1.3" 00:25:23.778 }, 00:25:23.778 "ns_data": { 00:25:23.778 "id": 1, 00:25:23.778 "can_share": true 00:25:23.778 } 00:25:23.778 } 00:25:23.778 ], 00:25:23.778 "mp_policy": "active_passive" 00:25:23.778 } 00:25:23.778 } 00:25:23.778 ] 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.llgnuDj2vJ 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:23.778 rmmod nvme_tcp 00:25:23.778 rmmod nvme_fabrics 00:25:23.778 rmmod nvme_keyring 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1637270 ']' 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1637270 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1637270 ']' 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1637270 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1637270 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1637270' 00:25:23.778 killing process with pid 1637270 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1637270 00:25:23.778 [2024-06-10 11:32:20.896981] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:23.778 [2024-06-10 11:32:20.897021] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:23.778 11:32:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1637270 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.039 11:32:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.949 11:32:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.949 00:25:25.949 real 0m11.836s 00:25:25.949 user 0m4.085s 00:25:25.949 sys 0m6.271s 00:25:25.949 11:32:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:25.949 11:32:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:25.949 ************************************ 00:25:25.949 END TEST nvmf_async_init 00:25:25.949 ************************************ 00:25:26.210 11:32:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:26.210 11:32:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:26.210 11:32:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:26.210 11:32:23 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.210 ************************************ 00:25:26.210 START TEST dma 00:25:26.210 ************************************ 00:25:26.210 11:32:23 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:26.210 * Looking for test storage... 00:25:26.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.210 11:32:23 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.210 11:32:23 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.210 11:32:23 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.210 11:32:23 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.210 11:32:23 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.210 11:32:23 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.210 11:32:23 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.210 11:32:23 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:26.210 11:32:23 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.210 11:32:23 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.210 11:32:23 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:26.210 11:32:23 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:26.210 00:25:26.210 real 0m0.135s 00:25:26.210 user 0m0.062s 00:25:26.210 sys 0m0.080s 00:25:26.210 11:32:23 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:26.210 11:32:23 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:26.210 ************************************ 00:25:26.210 END TEST dma 00:25:26.210 ************************************ 00:25:26.210 11:32:23 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:26.210 11:32:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:26.210 11:32:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:26.210 11:32:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.471 ************************************ 00:25:26.471 START TEST 
nvmf_identify 00:25:26.471 ************************************ 00:25:26.471 11:32:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:26.471 * Looking for test storage... 00:25:26.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.472 11:32:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.619 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:34.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:25:34.620 00:25:34.620 --- 10.0.0.2 ping statistics --- 00:25:34.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.620 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:34.620 00:25:34.620 --- 10.0.0.1 ping statistics --- 00:25:34.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.620 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1642002 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1642002 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1642002 ']' 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:34.620 11:32:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:34.620 [2024-06-10 11:32:31.782657] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:25:34.620 [2024-06-10 11:32:31.782715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.620 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.881 [2024-06-10 11:32:31.874783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.881 [2024-06-10 11:32:31.969008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:34.881 [2024-06-10 11:32:31.969070] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.881 [2024-06-10 11:32:31.969079] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.881 [2024-06-10 11:32:31.969085] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.881 [2024-06-10 11:32:31.969091] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.881 [2024-06-10 11:32:31.969218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.881 [2024-06-10 11:32:31.969348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.881 [2024-06-10 11:32:31.969507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.881 [2024-06-10 11:32:31.969508] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.450 [2024-06-10 11:32:32.652377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:35.450 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 Malloc0 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 [2024-06-10 11:32:32.748692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.712 [ 00:25:35.712 { 00:25:35.712 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:35.712 "subtype": "Discovery", 00:25:35.712 "listen_addresses": [ 00:25:35.712 { 00:25:35.712 "trtype": "TCP", 00:25:35.712 "adrfam": "IPv4", 00:25:35.712 "traddr": "10.0.0.2", 00:25:35.712 "trsvcid": "4420" 00:25:35.712 } 00:25:35.712 ], 00:25:35.712 "allow_any_host": true, 00:25:35.712 "hosts": [] 00:25:35.712 }, 00:25:35.712 { 00:25:35.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.712 "subtype": "NVMe", 00:25:35.712 "listen_addresses": [ 00:25:35.712 { 00:25:35.712 "trtype": "TCP", 00:25:35.712 "adrfam": "IPv4", 00:25:35.712 "traddr": "10.0.0.2", 00:25:35.712 "trsvcid": "4420" 00:25:35.712 } 00:25:35.712 ], 00:25:35.712 "allow_any_host": true, 00:25:35.712 "hosts": [], 00:25:35.712 "serial_number": "SPDK00000000000001", 00:25:35.712 "model_number": "SPDK bdev Controller", 00:25:35.712 "max_namespaces": 32, 00:25:35.712 "min_cntlid": 1, 00:25:35.712 "max_cntlid": 65519, 00:25:35.712 "namespaces": [ 00:25:35.712 { 00:25:35.712 "nsid": 1, 00:25:35.712 "bdev_name": "Malloc0", 00:25:35.712 "name": "Malloc0", 00:25:35.712 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:35.712 "eui64": "ABCDEF0123456789", 00:25:35.712 "uuid": "87a97605-13ee-4b69-9a83-4f2c9341a8e8" 00:25:35.712 } 00:25:35.712 ] 00:25:35.712 } 00:25:35.712 ] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.712 11:32:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:35.712 [2024-06-10 11:32:32.810856] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:25:35.712 [2024-06-10 11:32:32.810929] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642310 ] 00:25:35.712 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.712 [2024-06-10 11:32:32.843636] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:35.712 [2024-06-10 11:32:32.843682] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:35.712 [2024-06-10 11:32:32.843687] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:35.712 [2024-06-10 11:32:32.843697] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:35.712 [2024-06-10 11:32:32.843705] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:35.713 [2024-06-10 11:32:32.844141] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:35.713 [2024-06-10 11:32:32.844167] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x197dec0 0 00:25:35.713 [2024-06-10 11:32:32.854834] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:35.713 [2024-06-10 11:32:32.854844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:35.713 [2024-06-10 11:32:32.854849] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:35.713 [2024-06-10 11:32:32.854852] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:35.713 [2024-06-10 11:32:32.854884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.854890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.854894] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.854905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:35.713 [2024-06-10 11:32:32.854920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.862830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 11:32:32.862838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.862841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.862846] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.862857] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:35.713 [2024-06-10 11:32:32.862864] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:35.713 [2024-06-10 11:32:32.862872] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:35.713 [2024-06-10 11:32:32.862883] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.862887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.862890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.862897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.862909] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.863123] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 11:32:32.863129] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.863133] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.863142] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:35.713 [2024-06-10 11:32:32.863148] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:35.713 [2024-06-10 11:32:32.863154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863161] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.863167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.863177] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.863372] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 11:32:32.863378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.863381] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863384] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.863390] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:35.713 [2024-06-10 11:32:32.863397] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.863403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.863416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.863425] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.863624] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 
11:32:32.863630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.863633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.863642] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.863650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.863665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.863674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.863838] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 11:32:32.863844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.863847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.863855] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:35.713 [2024-06-10 11:32:32.863860] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.863866] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.863971] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:35.713 [2024-06-10 11:32:32.863975] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.863983] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863986] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.863989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.863996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.864005] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.713 [2024-06-10 11:32:32.864175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.713 [2024-06-10 11:32:32.864181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.713 [2024-06-10 11:32:32.864184] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.864188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.713 [2024-06-10 11:32:32.864193] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:35.713 [2024-06-10 11:32:32.864201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.864204] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.713 [2024-06-10 11:32:32.864207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.713 [2024-06-10 11:32:32.864214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.713 [2024-06-10 11:32:32.864222] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.714 [2024-06-10 11:32:32.864427] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.714 [2024-06-10 11:32:32.864433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.714 [2024-06-10 11:32:32.864436] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.864440] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.714 [2024-06-10 11:32:32.864445] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:35.714 [2024-06-10 11:32:32.864451] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.864458] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:35.714 [2024-06-10 11:32:32.864469] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.864477] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.864480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.864487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.714 [2024-06-10 11:32:32.864496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.714 [2024-06-10 11:32:32.864689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.714 [2024-06-10 11:32:32.864695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.714 [2024-06-10 11:32:32.864698] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.864702] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x197dec0): datao=0, datal=4096, cccid=0 00:25:35.714 [2024-06-10 11:32:32.864706] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a00df0) on tqpair(0x197dec0): expected_datao=0, payload_size=4096 00:25:35.714 [2024-06-10 11:32:32.864710] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.864730] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.864734] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.909830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.714 [2024-06-10 11:32:32.909843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.714 [2024-06-10 11:32:32.909846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.909850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.714 [2024-06-10 11:32:32.909859] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:35.714 [2024-06-10 11:32:32.909863] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:35.714 [2024-06-10 11:32:32.909867] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:35.714 [2024-06-10 11:32:32.909872] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:35.714 [2024-06-10 11:32:32.909876] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:35.714 [2024-06-10 11:32:32.909880] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.909888] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.909897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.909902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.909905] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.909912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.714 [2024-06-10 11:32:32.909924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.714 [2024-06-10 11:32:32.910156] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.714 [2024-06-10 11:32:32.910162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.714 [2024-06-10 11:32:32.910165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a00df0) on tqpair=0x197dec0 00:25:35.714 [2024-06-10 11:32:32.910178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910182] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910185] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.910191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:35.714 [2024-06-10 11:32:32.910196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.910208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.714 [2024-06-10 11:32:32.910214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910220] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.910226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.714 [2024-06-10 11:32:32.910231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.910243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.714 [2024-06-10 11:32:32.910247] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.910254] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:35.714 [2024-06-10 11:32:32.910261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x197dec0) 00:25:35.714 [2024-06-10 11:32:32.910270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.714 [2024-06-10 11:32:32.910281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00df0, cid 0, qid 0 00:25:35.714 [2024-06-10 11:32:32.910286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a00f50, cid 1, qid 0 00:25:35.714 [2024-06-10 11:32:32.910290] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a010b0, cid 2, qid 0 00:25:35.714 [2024-06-10 11:32:32.910295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01210, cid 3, qid 0 00:25:35.714 [2024-06-10 11:32:32.910299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01370, cid 4, qid 0 00:25:35.714 [2024-06-10 11:32:32.910517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.714 [2024-06-10 11:32:32.910523] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.714 [2024-06-10 11:32:32.910526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.714 [2024-06-10 11:32:32.910530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01370) on tqpair=0x197dec0 
00:25:35.714 [2024-06-10 11:32:32.910537] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:35.714 [2024-06-10 11:32:32.910544] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:35.714 [2024-06-10 11:32:32.910554] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x197dec0) 00:25:35.715 [2024-06-10 11:32:32.910563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.715 [2024-06-10 11:32:32.910572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01370, cid 4, qid 0 00:25:35.715 [2024-06-10 11:32:32.910775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.715 [2024-06-10 11:32:32.910781] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.715 [2024-06-10 11:32:32.910784] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910787] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x197dec0): datao=0, datal=4096, cccid=4 00:25:35.715 [2024-06-10 11:32:32.910792] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a01370) on tqpair(0x197dec0): expected_datao=0, payload_size=4096 00:25:35.715 [2024-06-10 11:32:32.910796] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910802] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910806] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.715 [2024-06-10 11:32:32.910954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.715 [2024-06-10 11:32:32.910957] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01370) on tqpair=0x197dec0 00:25:35.715 [2024-06-10 11:32:32.910972] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:35.715 [2024-06-10 11:32:32.910992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.910996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x197dec0) 00:25:35.715 [2024-06-10 11:32:32.911002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.715 [2024-06-10 11:32:32.911009] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x197dec0) 00:25:35.715 [2024-06-10 11:32:32.911021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.715 [2024-06-10 11:32:32.911035] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01370, cid 4, qid 0 00:25:35.715 [2024-06-10 11:32:32.911040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a014d0, cid 5, qid 0 00:25:35.715 [2024-06-10 11:32:32.911277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.715 [2024-06-10 11:32:32.911282] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.715 [2024-06-10 11:32:32.911286] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911289] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x197dec0): datao=0, datal=1024, cccid=4 00:25:35.715 [2024-06-10 11:32:32.911293] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a01370) on tqpair(0x197dec0): expected_datao=0, payload_size=1024 00:25:35.715 [2024-06-10 11:32:32.911297] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911303] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911308] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.715 [2024-06-10 11:32:32.911319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.715 [2024-06-10 11:32:32.911322] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.715 [2024-06-10 11:32:32.911325] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a014d0) on tqpair=0x197dec0 00:25:35.980 [2024-06-10 11:32:32.953019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.980 [2024-06-10 11:32:32.953030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.980 [2024-06-10 11:32:32.953033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.953037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01370) on tqpair=0x197dec0 00:25:35.980 [2024-06-10 11:32:32.953051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.953055] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x197dec0) 00:25:35.980 [2024-06-10 11:32:32.953062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.980 [2024-06-10 11:32:32.953075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01370, cid 4, qid 0 00:25:35.980 [2024-06-10 11:32:32.953253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.980 [2024-06-10 11:32:32.953259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.980 [2024-06-10 11:32:32.953262] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.953265] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x197dec0): datao=0, datal=3072, cccid=4 00:25:35.980 [2024-06-10 11:32:32.953269] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a01370) on tqpair(0x197dec0): expected_datao=0, payload_size=3072 00:25:35.980 [2024-06-10 11:32:32.953273] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.953315] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
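The GET LOG PAGE commands around this point all target log identifier 0x70, the Discovery Log. In cdw10, bits 7:0 carry the log page ID and bits 31:16 carry NUMDL, the zero-based dword count, which is why the c2h_data PDUs answer these reads with 1024, 3072 and 8 bytes respectively (the byte counts below are taken from the datal values in the surrounding records; the decoding itself is standard NVMe Get Log Page layout):

  # (NUMDL + 1) dwords * 4 bytes per dword
  echo $(( (0x00ff + 1) * 4 ))   # 1024 bytes  (cdw10:00ff0070)
  echo $(( (0x02ff + 1) * 4 ))   # 3072 bytes  (cdw10:02ff0070)
  echo $(( (0x0001 + 1) * 4 ))   # 8 bytes     (cdw10:00010070)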
00:25:35.980 [2024-06-10 11:32:32.953318] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.996828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.980 [2024-06-10 11:32:32.996837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.980 [2024-06-10 11:32:32.996840] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.996843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01370) on tqpair=0x197dec0 00:25:35.980 [2024-06-10 11:32:32.996853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.996856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x197dec0) 00:25:35.980 [2024-06-10 11:32:32.996863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.980 [2024-06-10 11:32:32.996876] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01370, cid 4, qid 0 00:25:35.980 [2024-06-10 11:32:32.997086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.980 [2024-06-10 11:32:32.997092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.980 [2024-06-10 11:32:32.997095] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.997098] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x197dec0): datao=0, datal=8, cccid=4 00:25:35.980 [2024-06-10 11:32:32.997102] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a01370) on tqpair(0x197dec0): expected_datao=0, payload_size=8 00:25:35.980 [2024-06-10 11:32:32.997106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.997112] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:32.997116] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:33.038989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.980 [2024-06-10 11:32:33.038998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.980 [2024-06-10 11:32:33.039001] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.980 [2024-06-10 11:32:33.039005] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01370) on tqpair=0x197dec0 00:25:35.980 ===================================================== 00:25:35.980 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:35.980 ===================================================== 00:25:35.980 Controller Capabilities/Features 00:25:35.980 ================================ 00:25:35.980 Vendor ID: 0000 00:25:35.980 Subsystem Vendor ID: 0000 00:25:35.980 Serial Number: .................... 00:25:35.980 Model Number: ........................................ 
00:25:35.980 Firmware Version: 24.09 00:25:35.980 Recommended Arb Burst: 0 00:25:35.980 IEEE OUI Identifier: 00 00 00 00:25:35.980 Multi-path I/O 00:25:35.980 May have multiple subsystem ports: No 00:25:35.980 May have multiple controllers: No 00:25:35.980 Associated with SR-IOV VF: No 00:25:35.980 Max Data Transfer Size: 131072 00:25:35.980 Max Number of Namespaces: 0 00:25:35.980 Max Number of I/O Queues: 1024 00:25:35.980 NVMe Specification Version (VS): 1.3 00:25:35.980 NVMe Specification Version (Identify): 1.3 00:25:35.980 Maximum Queue Entries: 128 00:25:35.980 Contiguous Queues Required: Yes 00:25:35.980 Arbitration Mechanisms Supported 00:25:35.980 Weighted Round Robin: Not Supported 00:25:35.980 Vendor Specific: Not Supported 00:25:35.980 Reset Timeout: 15000 ms 00:25:35.980 Doorbell Stride: 4 bytes 00:25:35.980 NVM Subsystem Reset: Not Supported 00:25:35.981 Command Sets Supported 00:25:35.981 NVM Command Set: Supported 00:25:35.981 Boot Partition: Not Supported 00:25:35.981 Memory Page Size Minimum: 4096 bytes 00:25:35.981 Memory Page Size Maximum: 4096 bytes 00:25:35.981 Persistent Memory Region: Not Supported 00:25:35.981 Optional Asynchronous Events Supported 00:25:35.981 Namespace Attribute Notices: Not Supported 00:25:35.981 Firmware Activation Notices: Not Supported 00:25:35.981 ANA Change Notices: Not Supported 00:25:35.981 PLE Aggregate Log Change Notices: Not Supported 00:25:35.981 LBA Status Info Alert Notices: Not Supported 00:25:35.981 EGE Aggregate Log Change Notices: Not Supported 00:25:35.981 Normal NVM Subsystem Shutdown event: Not Supported 00:25:35.981 Zone Descriptor Change Notices: Not Supported 00:25:35.981 Discovery Log Change Notices: Supported 00:25:35.981 Controller Attributes 00:25:35.981 128-bit Host Identifier: Not Supported 00:25:35.981 Non-Operational Permissive Mode: Not Supported 00:25:35.981 NVM Sets: Not Supported 00:25:35.981 Read Recovery Levels: Not Supported 00:25:35.981 Endurance Groups: Not Supported 00:25:35.981 Predictable Latency Mode: Not Supported 00:25:35.981 Traffic Based Keep ALive: Not Supported 00:25:35.981 Namespace Granularity: Not Supported 00:25:35.981 SQ Associations: Not Supported 00:25:35.981 UUID List: Not Supported 00:25:35.981 Multi-Domain Subsystem: Not Supported 00:25:35.981 Fixed Capacity Management: Not Supported 00:25:35.981 Variable Capacity Management: Not Supported 00:25:35.981 Delete Endurance Group: Not Supported 00:25:35.981 Delete NVM Set: Not Supported 00:25:35.981 Extended LBA Formats Supported: Not Supported 00:25:35.981 Flexible Data Placement Supported: Not Supported 00:25:35.981 00:25:35.981 Controller Memory Buffer Support 00:25:35.981 ================================ 00:25:35.981 Supported: No 00:25:35.981 00:25:35.981 Persistent Memory Region Support 00:25:35.981 ================================ 00:25:35.981 Supported: No 00:25:35.981 00:25:35.981 Admin Command Set Attributes 00:25:35.981 ============================ 00:25:35.981 Security Send/Receive: Not Supported 00:25:35.981 Format NVM: Not Supported 00:25:35.981 Firmware Activate/Download: Not Supported 00:25:35.981 Namespace Management: Not Supported 00:25:35.981 Device Self-Test: Not Supported 00:25:35.981 Directives: Not Supported 00:25:35.981 NVMe-MI: Not Supported 00:25:35.981 Virtualization Management: Not Supported 00:25:35.981 Doorbell Buffer Config: Not Supported 00:25:35.981 Get LBA Status Capability: Not Supported 00:25:35.981 Command & Feature Lockdown Capability: Not Supported 00:25:35.981 Abort Command Limit: 1 00:25:35.981 Async 
Event Request Limit: 4 00:25:35.981 Number of Firmware Slots: N/A 00:25:35.981 Firmware Slot 1 Read-Only: N/A 00:25:35.981 Firmware Activation Without Reset: N/A 00:25:35.981 Multiple Update Detection Support: N/A 00:25:35.981 Firmware Update Granularity: No Information Provided 00:25:35.981 Per-Namespace SMART Log: No 00:25:35.981 Asymmetric Namespace Access Log Page: Not Supported 00:25:35.981 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:35.981 Command Effects Log Page: Not Supported 00:25:35.981 Get Log Page Extended Data: Supported 00:25:35.981 Telemetry Log Pages: Not Supported 00:25:35.981 Persistent Event Log Pages: Not Supported 00:25:35.981 Supported Log Pages Log Page: May Support 00:25:35.981 Commands Supported & Effects Log Page: Not Supported 00:25:35.981 Feature Identifiers & Effects Log Page:May Support 00:25:35.981 NVMe-MI Commands & Effects Log Page: May Support 00:25:35.981 Data Area 4 for Telemetry Log: Not Supported 00:25:35.981 Error Log Page Entries Supported: 128 00:25:35.981 Keep Alive: Not Supported 00:25:35.981 00:25:35.981 NVM Command Set Attributes 00:25:35.981 ========================== 00:25:35.981 Submission Queue Entry Size 00:25:35.981 Max: 1 00:25:35.981 Min: 1 00:25:35.981 Completion Queue Entry Size 00:25:35.981 Max: 1 00:25:35.981 Min: 1 00:25:35.981 Number of Namespaces: 0 00:25:35.981 Compare Command: Not Supported 00:25:35.981 Write Uncorrectable Command: Not Supported 00:25:35.981 Dataset Management Command: Not Supported 00:25:35.981 Write Zeroes Command: Not Supported 00:25:35.981 Set Features Save Field: Not Supported 00:25:35.981 Reservations: Not Supported 00:25:35.981 Timestamp: Not Supported 00:25:35.981 Copy: Not Supported 00:25:35.981 Volatile Write Cache: Not Present 00:25:35.981 Atomic Write Unit (Normal): 1 00:25:35.981 Atomic Write Unit (PFail): 1 00:25:35.981 Atomic Compare & Write Unit: 1 00:25:35.981 Fused Compare & Write: Supported 00:25:35.981 Scatter-Gather List 00:25:35.981 SGL Command Set: Supported 00:25:35.981 SGL Keyed: Supported 00:25:35.981 SGL Bit Bucket Descriptor: Not Supported 00:25:35.981 SGL Metadata Pointer: Not Supported 00:25:35.981 Oversized SGL: Not Supported 00:25:35.981 SGL Metadata Address: Not Supported 00:25:35.981 SGL Offset: Supported 00:25:35.981 Transport SGL Data Block: Not Supported 00:25:35.981 Replay Protected Memory Block: Not Supported 00:25:35.981 00:25:35.981 Firmware Slot Information 00:25:35.981 ========================= 00:25:35.981 Active slot: 0 00:25:35.981 00:25:35.981 00:25:35.981 Error Log 00:25:35.981 ========= 00:25:35.981 00:25:35.981 Active Namespaces 00:25:35.981 ================= 00:25:35.981 Discovery Log Page 00:25:35.981 ================== 00:25:35.981 Generation Counter: 2 00:25:35.981 Number of Records: 2 00:25:35.981 Record Format: 0 00:25:35.981 00:25:35.981 Discovery Log Entry 0 00:25:35.981 ---------------------- 00:25:35.981 Transport Type: 3 (TCP) 00:25:35.981 Address Family: 1 (IPv4) 00:25:35.981 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:35.981 Entry Flags: 00:25:35.981 Duplicate Returned Information: 1 00:25:35.981 Explicit Persistent Connection Support for Discovery: 1 00:25:35.981 Transport Requirements: 00:25:35.981 Secure Channel: Not Required 00:25:35.981 Port ID: 0 (0x0000) 00:25:35.981 Controller ID: 65535 (0xffff) 00:25:35.981 Admin Max SQ Size: 128 00:25:35.981 Transport Service Identifier: 4420 00:25:35.981 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:35.981 Transport Address: 10.0.0.2 00:25:35.981 
Discovery Log Entry 1 00:25:35.981 ---------------------- 00:25:35.981 Transport Type: 3 (TCP) 00:25:35.981 Address Family: 1 (IPv4) 00:25:35.981 Subsystem Type: 2 (NVM Subsystem) 00:25:35.981 Entry Flags: 00:25:35.981 Duplicate Returned Information: 0 00:25:35.981 Explicit Persistent Connection Support for Discovery: 0 00:25:35.981 Transport Requirements: 00:25:35.981 Secure Channel: Not Required 00:25:35.981 Port ID: 0 (0x0000) 00:25:35.981 Controller ID: 65535 (0xffff) 00:25:35.981 Admin Max SQ Size: 128 00:25:35.981 Transport Service Identifier: 4420 00:25:35.981 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:35.981 Transport Address: 10.0.0.2 [2024-06-10 11:32:33.039083] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:35.981 [2024-06-10 11:32:33.039096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.981 [2024-06-10 11:32:33.039102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.981 [2024-06-10 11:32:33.039108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.981 [2024-06-10 11:32:33.039113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.981 [2024-06-10 11:32:33.039121] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.981 [2024-06-10 11:32:33.039124] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.981 [2024-06-10 11:32:33.039128] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x197dec0) 00:25:35.982 [2024-06-10 11:32:33.039135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.039147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01210, cid 3, qid 0 00:25:35.982 [2024-06-10 11:32:33.039271] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.039277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.039280] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039284] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01210) on tqpair=0x197dec0 00:25:35.982 [2024-06-10 11:32:33.039291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x197dec0) 00:25:35.982 [2024-06-10 11:32:33.039304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.039315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01210, cid 3, qid 0 00:25:35.982 [2024-06-10 11:32:33.039520] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.039526] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.039529] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039532] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01210) on tqpair=0x197dec0 00:25:35.982 [2024-06-10 11:32:33.039537] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:35.982 [2024-06-10 11:32:33.039544] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:35.982 [2024-06-10 11:32:33.039553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x197dec0) 00:25:35.982 [2024-06-10 11:32:33.039566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.039575] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01210, cid 3, qid 0 00:25:35.982 [2024-06-10 11:32:33.039772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.039778] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.039781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039785] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01210) on tqpair=0x197dec0 00:25:35.982 [2024-06-10 11:32:33.039795] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.039802] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x197dec0) 00:25:35.982 [2024-06-10 11:32:33.039808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.039817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a01210, cid 3, qid 0 00:25:35.982 [2024-06-10 11:32:33.043829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.043836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.043839] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.043842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a01210) on tqpair=0x197dec0 00:25:35.982 [2024-06-10 11:32:33.043850] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:35.982 00:25:35.982 11:32:33 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:35.982 [2024-06-10 11:32:33.079845] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
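The command echoed above is the second pass of host/identify.sh, pointing the SPDK identify example at the NVM subsystem (nqn.2016-06.io.spdk:cnode1) that the discovery log dump earlier in this section advertises on 10.0.0.2:4420. For reference only, the same target could be exercised by hand with standard nvme-cli; this is an illustrative sketch and is not part of the test run, and the /dev/nvme0 name is an assumption that depends on kernel enumeration:

  # hypothetical manual reproduction with nvme-cli (not executed by this job)
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl  /dev/nvme0        # device name depends on enumeration order
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1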
00:25:35.982 [2024-06-10 11:32:33.079888] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642319 ] 00:25:35.982 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.982 [2024-06-10 11:32:33.112477] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:35.982 [2024-06-10 11:32:33.112514] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:35.982 [2024-06-10 11:32:33.112518] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:35.982 [2024-06-10 11:32:33.112529] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:35.982 [2024-06-10 11:32:33.112537] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:35.982 [2024-06-10 11:32:33.112939] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:35.982 [2024-06-10 11:32:33.112962] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fe5ec0 0 00:25:35.982 [2024-06-10 11:32:33.125830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:35.982 [2024-06-10 11:32:33.125840] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:35.982 [2024-06-10 11:32:33.125845] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:35.982 [2024-06-10 11:32:33.125848] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:35.982 [2024-06-10 11:32:33.125877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.125882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.125886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.982 [2024-06-10 11:32:33.125899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:35.982 [2024-06-10 11:32:33.125914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.982 [2024-06-10 11:32:33.130831] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.130839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.130842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.130847] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.982 [2024-06-10 11:32:33.130858] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:35.982 [2024-06-10 11:32:33.130864] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:35.982 [2024-06-10 11:32:33.130869] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:35.982 [2024-06-10 11:32:33.130879] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.130882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 
11:32:33.130886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.982 [2024-06-10 11:32:33.130892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.130904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.982 [2024-06-10 11:32:33.131023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.131030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.131033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.131037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.982 [2024-06-10 11:32:33.131042] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:35.982 [2024-06-10 11:32:33.131049] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:35.982 [2024-06-10 11:32:33.131055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.131059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.131062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.982 [2024-06-10 11:32:33.131068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.982 [2024-06-10 11:32:33.131078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.982 [2024-06-10 11:32:33.131270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.982 [2024-06-10 11:32:33.131276] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.982 [2024-06-10 11:32:33.131279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.982 [2024-06-10 11:32:33.131283] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.982 [2024-06-10 11:32:33.131288] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:35.982 [2024-06-10 11:32:33.131295] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:35.982 [2024-06-10 11:32:33.131302] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.131314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.983 [2024-06-10 11:32:33.131326] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.131522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.131528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:35.983 [2024-06-10 11:32:33.131531] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131535] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.131540] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:35.983 [2024-06-10 11:32:33.131548] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131556] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.131562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.983 [2024-06-10 11:32:33.131571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.131756] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.131762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.983 [2024-06-10 11:32:33.131765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.131774] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:35.983 [2024-06-10 11:32:33.131778] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:35.983 [2024-06-10 11:32:33.131784] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:35.983 [2024-06-10 11:32:33.131890] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:35.983 [2024-06-10 11:32:33.131893] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:35.983 [2024-06-10 11:32:33.131900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.131907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.131913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.983 [2024-06-10 11:32:33.131922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.132043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.132049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.983 [2024-06-10 11:32:33.132052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on 
tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.132060] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:35.983 [2024-06-10 11:32:33.132068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132075] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.132083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.983 [2024-06-10 11:32:33.132092] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.132294] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.132299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.983 [2024-06-10 11:32:33.132303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.132311] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:35.983 [2024-06-10 11:32:33.132315] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:35.983 [2024-06-10 11:32:33.132321] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:35.983 [2024-06-10 11:32:33.132328] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:35.983 [2024-06-10 11:32:33.132336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.132345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.983 [2024-06-10 11:32:33.132354] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.132542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.983 [2024-06-10 11:32:33.132548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.983 [2024-06-10 11:32:33.132552] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132555] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=4096, cccid=0 00:25:35.983 [2024-06-10 11:32:33.132559] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2068df0) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=4096 00:25:35.983 [2024-06-10 11:32:33.132563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132583] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.132587] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.177828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.177838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.983 [2024-06-10 11:32:33.177841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.177845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.177853] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:35.983 [2024-06-10 11:32:33.177857] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:35.983 [2024-06-10 11:32:33.177861] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:35.983 [2024-06-10 11:32:33.177865] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:35.983 [2024-06-10 11:32:33.177869] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:35.983 [2024-06-10 11:32:33.177873] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:35.983 [2024-06-10 11:32:33.177881] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:35.983 [2024-06-10 11:32:33.177892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.177896] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.177900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.177907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.983 [2024-06-10 11:32:33.177918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.983 [2024-06-10 11:32:33.178148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.983 [2024-06-10 11:32:33.178153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.983 [2024-06-10 11:32:33.178157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.178160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2068df0) on tqpair=0x1fe5ec0 00:25:35.983 [2024-06-10 11:32:33.178169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.178172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.178175] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fe5ec0) 00:25:35.983 [2024-06-10 11:32:33.178181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.983 [2024-06-10 11:32:33.178187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.178190] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.983 [2024-06-10 11:32:33.178193] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.984 [2024-06-10 11:32:33.178204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178207] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.984 [2024-06-10 11:32:33.178221] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.984 [2024-06-10 11:32:33.178237] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178244] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178250] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.984 [2024-06-10 11:32:33.178270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068df0, cid 0, qid 0 00:25:35.984 [2024-06-10 11:32:33.178274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2068f50, cid 1, qid 0 00:25:35.984 [2024-06-10 11:32:33.178279] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20690b0, cid 2, qid 0 00:25:35.984 [2024-06-10 11:32:33.178285] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.984 [2024-06-10 11:32:33.178289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.984 [2024-06-10 11:32:33.178474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.984 [2024-06-10 11:32:33.178480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.984 [2024-06-10 11:32:33.178483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.984 [2024-06-10 11:32:33.178493] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:35.984 [2024-06-10 11:32:33.178498] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178505] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178510] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178516] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178520] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.984 [2024-06-10 11:32:33.178538] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.984 [2024-06-10 11:32:33.178711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.984 [2024-06-10 11:32:33.178717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.984 [2024-06-10 11:32:33.178720] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178724] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.984 [2024-06-10 11:32:33.178773] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178781] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.178788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.178791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.178797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.984 [2024-06-10 11:32:33.178806] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.984 [2024-06-10 11:32:33.179026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.984 [2024-06-10 11:32:33.179034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.984 [2024-06-10 11:32:33.179038] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179042] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=4096, cccid=4 00:25:35.984 [2024-06-10 11:32:33.179047] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069370) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=4096 00:25:35.984 [2024-06-10 11:32:33.179052] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179058] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179062] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179267] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.984 [2024-06-10 11:32:33.179273] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.984 [2024-06-10 11:32:33.179276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.984 [2024-06-10 11:32:33.179288] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:35.984 [2024-06-10 11:32:33.179301] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.179310] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.179316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179319] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.179325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.984 [2024-06-10 11:32:33.179335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.984 [2024-06-10 11:32:33.179492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.984 [2024-06-10 11:32:33.179498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.984 [2024-06-10 11:32:33.179501] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179505] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=4096, cccid=4 00:25:35.984 [2024-06-10 11:32:33.179508] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069370) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=4096 00:25:35.984 [2024-06-10 11:32:33.179512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179518] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179522] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.984 [2024-06-10 11:32:33.179675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.984 [2024-06-10 11:32:33.179678] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.984 [2024-06-10 11:32:33.179692] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.179700] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:35.984 [2024-06-10 11:32:33.179706] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1fe5ec0) 00:25:35.984 [2024-06-10 11:32:33.179716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.984 [2024-06-10 11:32:33.179725] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.984 [2024-06-10 11:32:33.179895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.984 [2024-06-10 11:32:33.179903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.984 [2024-06-10 11:32:33.179906] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.984 [2024-06-10 11:32:33.179910] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=4096, cccid=4 00:25:35.984 [2024-06-10 11:32:33.179914] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069370) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=4096 00:25:35.984 [2024-06-10 11:32:33.179920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.179926] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.179929] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.180029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.180033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.180044] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180051] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180059] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180065] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180070] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180076] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:35.985 [2024-06-10 11:32:33.180080] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:35.985 [2024-06-10 11:32:33.180086] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:35.985 [2024-06-10 11:32:33.180100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.180111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.180117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180120] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180123] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.180129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.985 [2024-06-10 11:32:33.180141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.985 [2024-06-10 11:32:33.180146] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20694d0, cid 5, qid 0 00:25:35.985 [2024-06-10 11:32:33.180317] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.180323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.180326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.180337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.180342] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.180345] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20694d0) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.180358] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.180371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.180380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20694d0, cid 5, qid 0 00:25:35.985 [2024-06-10 11:32:33.180569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.180574] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.180577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20694d0) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.180589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180593] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.180599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.180607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20694d0, cid 5, qid 0 00:25:35.985 [2024-06-10 11:32:33.180793] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.180798] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.180802] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180805] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20694d0) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.180814] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.180817] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.180827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.180836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20694d0, cid 5, qid 0 00:25:35.985 [2024-06-10 11:32:33.181022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.985 [2024-06-10 11:32:33.181028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.985 [2024-06-10 11:32:33.181031] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.181034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20694d0) on tqpair=0x1fe5ec0 00:25:35.985 [2024-06-10 11:32:33.181045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.181049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.181054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.181061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.181064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.181070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.181076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.181079] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.181085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.985 [2024-06-10 11:32:33.181091] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.985 [2024-06-10 11:32:33.181095] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fe5ec0) 00:25:35.985 [2024-06-10 11:32:33.181102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.986 [2024-06-10 11:32:33.181112] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20694d0, cid 5, qid 0 00:25:35.986 [2024-06-10 11:32:33.181117] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069370, cid 4, qid 0 00:25:35.986 [2024-06-10 11:32:33.181121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x2069630, cid 6, qid 0 00:25:35.986 [2024-06-10 11:32:33.181125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069790, cid 7, qid 0 00:25:35.986 [2024-06-10 11:32:33.181355] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.986 [2024-06-10 11:32:33.181361] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.986 [2024-06-10 11:32:33.181365] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181368] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=8192, cccid=5 00:25:35.986 [2024-06-10 11:32:33.181372] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20694d0) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=8192 00:25:35.986 [2024-06-10 11:32:33.181376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181440] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181444] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.986 [2024-06-10 11:32:33.181455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.986 [2024-06-10 11:32:33.181458] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181461] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=512, cccid=4 00:25:35.986 [2024-06-10 11:32:33.181465] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069370) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=512 00:25:35.986 [2024-06-10 11:32:33.181469] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181475] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181478] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.986 [2024-06-10 11:32:33.181488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.986 [2024-06-10 11:32:33.181491] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181494] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=512, cccid=6 00:25:35.986 [2024-06-10 11:32:33.181498] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069630) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=512 00:25:35.986 [2024-06-10 11:32:33.181502] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181508] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181511] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:35.986 [2024-06-10 11:32:33.181521] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:35.986 [2024-06-10 11:32:33.181525] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181528] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fe5ec0): datao=0, datal=4096, cccid=7 
00:25:35.986 [2024-06-10 11:32:33.181532] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2069790) on tqpair(0x1fe5ec0): expected_datao=0, payload_size=4096 00:25:35.986 [2024-06-10 11:32:33.181535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181551] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181556] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.986 [2024-06-10 11:32:33.181752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.986 [2024-06-10 11:32:33.181755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20694d0) on tqpair=0x1fe5ec0 00:25:35.986 [2024-06-10 11:32:33.181770] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.986 [2024-06-10 11:32:33.181775] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.986 [2024-06-10 11:32:33.181778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181782] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069370) on tqpair=0x1fe5ec0 00:25:35.986 [2024-06-10 11:32:33.181791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.986 [2024-06-10 11:32:33.181797] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.986 [2024-06-10 11:32:33.181801] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.181804] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069630) on tqpair=0x1fe5ec0 00:25:35.986 [2024-06-10 11:32:33.181813] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.986 [2024-06-10 11:32:33.181818] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.986 [2024-06-10 11:32:33.185826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.986 [2024-06-10 11:32:33.185831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069790) on tqpair=0x1fe5ec0 00:25:35.986 ===================================================== 00:25:35.986 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.986 ===================================================== 00:25:35.986 Controller Capabilities/Features 00:25:35.986 ================================ 00:25:35.986 Vendor ID: 8086 00:25:35.986 Subsystem Vendor ID: 8086 00:25:35.986 Serial Number: SPDK00000000000001 00:25:35.986 Model Number: SPDK bdev Controller 00:25:35.986 Firmware Version: 24.09 00:25:35.986 Recommended Arb Burst: 6 00:25:35.986 IEEE OUI Identifier: e4 d2 5c 00:25:35.986 Multi-path I/O 00:25:35.986 May have multiple subsystem ports: Yes 00:25:35.986 May have multiple controllers: Yes 00:25:35.986 Associated with SR-IOV VF: No 00:25:35.986 Max Data Transfer Size: 131072 00:25:35.986 Max Number of Namespaces: 32 00:25:35.986 Max Number of I/O Queues: 127 00:25:35.986 NVMe Specification Version (VS): 1.3 00:25:35.986 NVMe Specification Version (Identify): 1.3 00:25:35.986 Maximum Queue Entries: 128 00:25:35.986 Contiguous Queues Required: Yes 00:25:35.986 Arbitration Mechanisms Supported 00:25:35.986 Weighted Round Robin: Not Supported 00:25:35.986 Vendor 
Specific: Not Supported 00:25:35.986 Reset Timeout: 15000 ms 00:25:35.986 Doorbell Stride: 4 bytes 00:25:35.986 NVM Subsystem Reset: Not Supported 00:25:35.986 Command Sets Supported 00:25:35.986 NVM Command Set: Supported 00:25:35.986 Boot Partition: Not Supported 00:25:35.986 Memory Page Size Minimum: 4096 bytes 00:25:35.986 Memory Page Size Maximum: 4096 bytes 00:25:35.986 Persistent Memory Region: Not Supported 00:25:35.986 Optional Asynchronous Events Supported 00:25:35.986 Namespace Attribute Notices: Supported 00:25:35.986 Firmware Activation Notices: Not Supported 00:25:35.986 ANA Change Notices: Not Supported 00:25:35.986 PLE Aggregate Log Change Notices: Not Supported 00:25:35.986 LBA Status Info Alert Notices: Not Supported 00:25:35.986 EGE Aggregate Log Change Notices: Not Supported 00:25:35.986 Normal NVM Subsystem Shutdown event: Not Supported 00:25:35.986 Zone Descriptor Change Notices: Not Supported 00:25:35.986 Discovery Log Change Notices: Not Supported 00:25:35.986 Controller Attributes 00:25:35.986 128-bit Host Identifier: Supported 00:25:35.986 Non-Operational Permissive Mode: Not Supported 00:25:35.986 NVM Sets: Not Supported 00:25:35.986 Read Recovery Levels: Not Supported 00:25:35.986 Endurance Groups: Not Supported 00:25:35.986 Predictable Latency Mode: Not Supported 00:25:35.986 Traffic Based Keep ALive: Not Supported 00:25:35.986 Namespace Granularity: Not Supported 00:25:35.986 SQ Associations: Not Supported 00:25:35.986 UUID List: Not Supported 00:25:35.986 Multi-Domain Subsystem: Not Supported 00:25:35.986 Fixed Capacity Management: Not Supported 00:25:35.986 Variable Capacity Management: Not Supported 00:25:35.986 Delete Endurance Group: Not Supported 00:25:35.986 Delete NVM Set: Not Supported 00:25:35.986 Extended LBA Formats Supported: Not Supported 00:25:35.986 Flexible Data Placement Supported: Not Supported 00:25:35.986 00:25:35.986 Controller Memory Buffer Support 00:25:35.986 ================================ 00:25:35.986 Supported: No 00:25:35.986 00:25:35.986 Persistent Memory Region Support 00:25:35.986 ================================ 00:25:35.986 Supported: No 00:25:35.986 00:25:35.986 Admin Command Set Attributes 00:25:35.986 ============================ 00:25:35.986 Security Send/Receive: Not Supported 00:25:35.986 Format NVM: Not Supported 00:25:35.986 Firmware Activate/Download: Not Supported 00:25:35.986 Namespace Management: Not Supported 00:25:35.986 Device Self-Test: Not Supported 00:25:35.987 Directives: Not Supported 00:25:35.987 NVMe-MI: Not Supported 00:25:35.987 Virtualization Management: Not Supported 00:25:35.987 Doorbell Buffer Config: Not Supported 00:25:35.987 Get LBA Status Capability: Not Supported 00:25:35.987 Command & Feature Lockdown Capability: Not Supported 00:25:35.987 Abort Command Limit: 4 00:25:35.987 Async Event Request Limit: 4 00:25:35.987 Number of Firmware Slots: N/A 00:25:35.987 Firmware Slot 1 Read-Only: N/A 00:25:35.987 Firmware Activation Without Reset: N/A 00:25:35.987 Multiple Update Detection Support: N/A 00:25:35.987 Firmware Update Granularity: No Information Provided 00:25:35.987 Per-Namespace SMART Log: No 00:25:35.987 Asymmetric Namespace Access Log Page: Not Supported 00:25:35.987 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:35.987 Command Effects Log Page: Supported 00:25:35.987 Get Log Page Extended Data: Supported 00:25:35.987 Telemetry Log Pages: Not Supported 00:25:35.987 Persistent Event Log Pages: Not Supported 00:25:35.987 Supported Log Pages Log Page: May Support 00:25:35.987 Commands 
Supported & Effects Log Page: Not Supported 00:25:35.987 Feature Identifiers & Effects Log Page:May Support 00:25:35.987 NVMe-MI Commands & Effects Log Page: May Support 00:25:35.987 Data Area 4 for Telemetry Log: Not Supported 00:25:35.987 Error Log Page Entries Supported: 128 00:25:35.987 Keep Alive: Supported 00:25:35.987 Keep Alive Granularity: 10000 ms 00:25:35.987 00:25:35.987 NVM Command Set Attributes 00:25:35.987 ========================== 00:25:35.987 Submission Queue Entry Size 00:25:35.987 Max: 64 00:25:35.987 Min: 64 00:25:35.987 Completion Queue Entry Size 00:25:35.987 Max: 16 00:25:35.987 Min: 16 00:25:35.987 Number of Namespaces: 32 00:25:35.987 Compare Command: Supported 00:25:35.987 Write Uncorrectable Command: Not Supported 00:25:35.987 Dataset Management Command: Supported 00:25:35.987 Write Zeroes Command: Supported 00:25:35.987 Set Features Save Field: Not Supported 00:25:35.987 Reservations: Supported 00:25:35.987 Timestamp: Not Supported 00:25:35.987 Copy: Supported 00:25:35.987 Volatile Write Cache: Present 00:25:35.987 Atomic Write Unit (Normal): 1 00:25:35.987 Atomic Write Unit (PFail): 1 00:25:35.987 Atomic Compare & Write Unit: 1 00:25:35.987 Fused Compare & Write: Supported 00:25:35.987 Scatter-Gather List 00:25:35.987 SGL Command Set: Supported 00:25:35.987 SGL Keyed: Supported 00:25:35.987 SGL Bit Bucket Descriptor: Not Supported 00:25:35.987 SGL Metadata Pointer: Not Supported 00:25:35.987 Oversized SGL: Not Supported 00:25:35.987 SGL Metadata Address: Not Supported 00:25:35.987 SGL Offset: Supported 00:25:35.987 Transport SGL Data Block: Not Supported 00:25:35.987 Replay Protected Memory Block: Not Supported 00:25:35.987 00:25:35.987 Firmware Slot Information 00:25:35.987 ========================= 00:25:35.987 Active slot: 1 00:25:35.987 Slot 1 Firmware Revision: 24.09 00:25:35.987 00:25:35.987 00:25:35.987 Commands Supported and Effects 00:25:35.987 ============================== 00:25:35.987 Admin Commands 00:25:35.987 -------------- 00:25:35.987 Get Log Page (02h): Supported 00:25:35.987 Identify (06h): Supported 00:25:35.987 Abort (08h): Supported 00:25:35.987 Set Features (09h): Supported 00:25:35.987 Get Features (0Ah): Supported 00:25:35.987 Asynchronous Event Request (0Ch): Supported 00:25:35.987 Keep Alive (18h): Supported 00:25:35.987 I/O Commands 00:25:35.987 ------------ 00:25:35.987 Flush (00h): Supported LBA-Change 00:25:35.987 Write (01h): Supported LBA-Change 00:25:35.987 Read (02h): Supported 00:25:35.987 Compare (05h): Supported 00:25:35.987 Write Zeroes (08h): Supported LBA-Change 00:25:35.987 Dataset Management (09h): Supported LBA-Change 00:25:35.987 Copy (19h): Supported LBA-Change 00:25:35.987 Unknown (79h): Supported LBA-Change 00:25:35.987 Unknown (7Ah): Supported 00:25:35.987 00:25:35.987 Error Log 00:25:35.987 ========= 00:25:35.987 00:25:35.987 Arbitration 00:25:35.987 =========== 00:25:35.987 Arbitration Burst: 1 00:25:35.987 00:25:35.987 Power Management 00:25:35.987 ================ 00:25:35.987 Number of Power States: 1 00:25:35.987 Current Power State: Power State #0 00:25:35.987 Power State #0: 00:25:35.987 Max Power: 0.00 W 00:25:35.987 Non-Operational State: Operational 00:25:35.987 Entry Latency: Not Reported 00:25:35.987 Exit Latency: Not Reported 00:25:35.987 Relative Read Throughput: 0 00:25:35.987 Relative Read Latency: 0 00:25:35.987 Relative Write Throughput: 0 00:25:35.987 Relative Write Latency: 0 00:25:35.987 Idle Power: Not Reported 00:25:35.987 Active Power: Not Reported 00:25:35.987 Non-Operational 
Permissive Mode: Not Supported 00:25:35.987 00:25:35.987 Health Information 00:25:35.987 ================== 00:25:35.987 Critical Warnings: 00:25:35.987 Available Spare Space: OK 00:25:35.987 Temperature: OK 00:25:35.987 Device Reliability: OK 00:25:35.987 Read Only: No 00:25:35.987 Volatile Memory Backup: OK 00:25:35.987 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:35.987 Temperature Threshold: [2024-06-10 11:32:33.185924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.185930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fe5ec0) 00:25:35.987 [2024-06-10 11:32:33.185936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.987 [2024-06-10 11:32:33.185948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069790, cid 7, qid 0 00:25:35.987 [2024-06-10 11:32:33.186174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.987 [2024-06-10 11:32:33.186180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.987 [2024-06-10 11:32:33.186183] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186186] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069790) on tqpair=0x1fe5ec0 00:25:35.987 [2024-06-10 11:32:33.186212] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:35.987 [2024-06-10 11:32:33.186222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.987 [2024-06-10 11:32:33.186228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.987 [2024-06-10 11:32:33.186234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.987 [2024-06-10 11:32:33.186239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.987 [2024-06-10 11:32:33.186246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186250] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.987 [2024-06-10 11:32:33.186260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.987 [2024-06-10 11:32:33.186270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.987 [2024-06-10 11:32:33.186475] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.987 [2024-06-10 11:32:33.186482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.987 [2024-06-10 11:32:33.186486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186489] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.987 [2024-06-10 11:32:33.186496] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186500] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.987 [2024-06-10 11:32:33.186509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.987 [2024-06-10 11:32:33.186522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.987 [2024-06-10 11:32:33.186725] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.987 [2024-06-10 11:32:33.186731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.987 [2024-06-10 11:32:33.186734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.987 [2024-06-10 11:32:33.186738] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.987 [2024-06-10 11:32:33.186743] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:35.988 [2024-06-10 11:32:33.186747] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:35.988 [2024-06-10 11:32:33.186755] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.186759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.186762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.186768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.186777] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.186975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.186981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.186984] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.186988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.186997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.187011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.187020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.187228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.187234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.187237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187241] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.187250] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187254] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187257] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.187263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.187274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.187431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.187436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.187439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.187452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187459] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.187465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.187474] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.187633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.187639] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.187642] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187645] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.187655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187658] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187661] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.187667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.187676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.187825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.187831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.187834] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.187847] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.187851] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 
11:32:33.187854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.187860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.187869] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.188135] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.188141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.188145] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.188158] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188164] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.188171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.188181] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.188387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.188393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.188396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188399] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.188410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.188422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.188431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.188590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.188596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.188599] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188602] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.188612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.188625] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.188633] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.188785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.188790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.188794] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.188807] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188813] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.988 [2024-06-10 11:32:33.188819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.988 [2024-06-10 11:32:33.188832] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.988 [2024-06-10 11:32:33.188941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.988 [2024-06-10 11:32:33.188947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.988 [2024-06-10 11:32:33.188950] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.988 [2024-06-10 11:32:33.188954] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.988 [2024-06-10 11:32:33.188963] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.188966] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.188970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.188976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.188984] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.189143] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.989 [2024-06-10 11:32:33.189148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.189152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.189165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189168] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189171] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.189177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.189186] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.189346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.989 [2024-06-10 11:32:33.189352] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.189355] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189359] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.189368] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.189381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.189390] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.189541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.989 [2024-06-10 11:32:33.189547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.189550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189553] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.189563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189566] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189569] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.189575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.189584] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.189749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.989 [2024-06-10 11:32:33.189755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.189758] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.189770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.189777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.189783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.189792] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.193830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:35.989 [2024-06-10 11:32:33.193839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.193842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.193846] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.193856] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.193859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.193862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fe5ec0) 00:25:35.989 [2024-06-10 11:32:33.193868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.989 [2024-06-10 11:32:33.193878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2069210, cid 3, qid 0 00:25:35.989 [2024-06-10 11:32:33.194017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:35.989 [2024-06-10 11:32:33.194023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:35.989 [2024-06-10 11:32:33.194026] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:35.989 [2024-06-10 11:32:33.194030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2069210) on tqpair=0x1fe5ec0 00:25:35.989 [2024-06-10 11:32:33.194038] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:36.250 0 Kelvin (-273 Celsius) 00:25:36.250 Available Spare: 0% 00:25:36.250 Available Spare Threshold: 0% 00:25:36.250 Life Percentage Used: 0% 00:25:36.250 Data Units Read: 0 00:25:36.250 Data Units Written: 0 00:25:36.250 Host Read Commands: 0 00:25:36.250 Host Write Commands: 0 00:25:36.250 Controller Busy Time: 0 minutes 00:25:36.250 Power Cycles: 0 00:25:36.250 Power On Hours: 0 hours 00:25:36.250 Unsafe Shutdowns: 0 00:25:36.250 Unrecoverable Media Errors: 0 00:25:36.250 Lifetime Error Log Entries: 0 00:25:36.250 Warning Temperature Time: 0 minutes 00:25:36.250 Critical Temperature Time: 0 minutes 00:25:36.250 00:25:36.250 Number of Queues 00:25:36.250 ================ 00:25:36.250 Number of I/O Submission Queues: 127 00:25:36.250 Number of I/O Completion Queues: 127 00:25:36.250 00:25:36.250 Active Namespaces 00:25:36.250 ================= 00:25:36.250 Namespace ID:1 00:25:36.250 Error Recovery Timeout: Unlimited 00:25:36.250 Command Set Identifier: NVM (00h) 00:25:36.250 Deallocate: Supported 00:25:36.250 Deallocated/Unwritten Error: Not Supported 00:25:36.250 Deallocated Read Value: Unknown 00:25:36.250 Deallocate in Write Zeroes: Not Supported 00:25:36.250 Deallocated Guard Field: 0xFFFF 00:25:36.250 Flush: Supported 00:25:36.250 Reservation: Supported 00:25:36.250 Namespace Sharing Capabilities: Multiple Controllers 00:25:36.250 Size (in LBAs): 131072 (0GiB) 00:25:36.250 Capacity (in LBAs): 131072 (0GiB) 00:25:36.250 Utilization (in LBAs): 131072 (0GiB) 00:25:36.250 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:36.250 EUI64: ABCDEF0123456789 00:25:36.250 UUID: 87a97605-13ee-4b69-9a83-4f2c9341a8e8 00:25:36.250 Thin Provisioning: Not Supported 00:25:36.250 Per-NS Atomic Units: Yes 00:25:36.250 Atomic Boundary Size (Normal): 0 00:25:36.250 Atomic Boundary Size (PFail): 0 00:25:36.250 Atomic Boundary Offset: 0 00:25:36.250 Maximum Single Source Range Length: 
65535 00:25:36.250 Maximum Copy Length: 65535 00:25:36.250 Maximum Source Range Count: 1 00:25:36.250 NGUID/EUI64 Never Reused: No 00:25:36.250 Namespace Write Protected: No 00:25:36.250 Number of LBA Formats: 1 00:25:36.250 Current LBA Format: LBA Format #00 00:25:36.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:36.250 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.250 rmmod nvme_tcp 00:25:36.250 rmmod nvme_fabrics 00:25:36.250 rmmod nvme_keyring 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1642002 ']' 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1642002 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1642002 ']' 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1642002 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1642002 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1642002' 00:25:36.250 killing process with pid 1642002 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1642002 00:25:36.250 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1642002 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.511 
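# --- Editorial sketch, not captured test output: the identify-test teardown that the
# --- nvmftestfini path performs in the trace above, written out as the equivalent
# --- manual commands. The rpc.py path, subsystem NQN, module names and PID are copied
# --- from the trace; running them by hand like this is an illustrative assumption only.
sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem from the target
sudo modprobe -v -r nvme-tcp                           # unload initiator modules (rmmod nvme_tcp/nvme_fabrics/nvme_keyring above)
sudo modprobe -v -r nvme-fabrics
kill 1642002                                           # stop the nvmf_tgt process started for this test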
11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.511 11:32:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.423 11:32:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.423 00:25:38.423 real 0m12.108s 00:25:38.423 user 0m8.420s 00:25:38.423 sys 0m6.526s 00:25:38.423 11:32:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:38.423 11:32:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:38.423 ************************************ 00:25:38.423 END TEST nvmf_identify 00:25:38.423 ************************************ 00:25:38.423 11:32:35 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:38.423 11:32:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:38.423 11:32:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:38.423 11:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.423 ************************************ 00:25:38.423 START TEST nvmf_perf 00:25:38.423 ************************************ 00:25:38.423 11:32:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:38.684 * Looking for test storage... 00:25:38.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.684 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.685 11:32:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.830 11:32:43 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.830 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.830 
11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:46.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.830 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:25:46.831 00:25:46.831 --- 10.0.0.2 ping statistics --- 00:25:46.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.831 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:46.831 00:25:46.831 --- 10.0.0.1 ping statistics --- 00:25:46.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.831 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1646722 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1646722 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1646722 ']' 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:46.831 11:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:46.831 [2024-06-10 11:32:43.929586] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
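Editor's note: the nvmf_tcp_init trace above can be hard to follow inside the interleaved xtrace output, so here is a condensed sketch of the same network setup, assembled only from the commands visible in this run. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x addresses are the values this particular host happened to use; they are not fixed and will differ elsewhere.

  # move one port of the NIC pair into a private namespace so target and
  # initiator talk over a real link instead of loopback (as traced above)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side stays in the default namespace, target side in the new one
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow the NVMe/TCP port through the host firewall, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

After this point the target application is started inside the namespace (the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt" invocation below), which is why all later RPCs reach a target bound to 10.0.0.2.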
00:25:46.831 [2024-06-10 11:32:43.929641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.831 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.831 [2024-06-10 11:32:44.018466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.092 [2024-06-10 11:32:44.081899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.092 [2024-06-10 11:32:44.081936] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.092 [2024-06-10 11:32:44.081944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.092 [2024-06-10 11:32:44.081950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.092 [2024-06-10 11:32:44.081955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.092 [2024-06-10 11:32:44.082070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.092 [2024-06-10 11:32:44.082183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.092 [2024-06-10 11:32:44.082333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.092 [2024-06-10 11:32:44.082334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:47.733 11:32:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:51.037 11:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:51.037 11:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:51.037 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:51.298 [2024-06-10 11:32:48.422999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:25:51.298 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.558 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:51.558 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.819 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:51.819 11:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:52.078 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.078 [2024-06-10 11:32:49.222435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.078 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:52.339 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:52.339 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:52.339 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:52.339 11:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:53.725 Initializing NVMe Controllers 00:25:53.725 Attached to NVMe Controller at 0000:65:00.0 [8086:0a54] 00:25:53.725 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:53.725 Initialization complete. Launching workers. 00:25:53.725 ======================================================== 00:25:53.725 Latency(us) 00:25:53.725 Device Information : IOPS MiB/s Average min max 00:25:53.725 PCIE (0000:65:00.0) NSID 1 from core 0: 86713.86 338.73 368.48 41.18 6248.60 00:25:53.725 ======================================================== 00:25:53.725 Total : 86713.86 338.73 368.48 41.18 6248.60 00:25:53.725 00:25:53.725 11:32:50 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.725 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.113 Initializing NVMe Controllers 00:25:55.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:55.113 Initialization complete. Launching workers. 
00:25:55.113 ======================================================== 00:25:55.113 Latency(us) 00:25:55.113 Device Information : IOPS MiB/s Average min max 00:25:55.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 85.00 0.33 11924.52 267.31 45996.23 00:25:55.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17613.07 7957.71 47889.44 00:25:55.113 ======================================================== 00:25:55.113 Total : 143.00 0.56 14231.77 267.31 47889.44 00:25:55.113 00:25:55.113 11:32:52 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:55.113 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.498 Initializing NVMe Controllers 00:25:56.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:56.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:56.498 Initialization complete. Launching workers. 00:25:56.498 ======================================================== 00:25:56.498 Latency(us) 00:25:56.498 Device Information : IOPS MiB/s Average min max 00:25:56.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9244.22 36.11 3463.52 501.73 6927.30 00:25:56.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3868.68 15.11 8316.36 7028.34 16027.45 00:25:56.498 ======================================================== 00:25:56.498 Total : 13112.90 51.22 4895.25 501.73 16027.45 00:25:56.498 00:25:56.498 11:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:56.498 11:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:56.498 11:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:56.498 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.037 Initializing NVMe Controllers 00:25:59.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.037 Controller IO queue size 128, less than required. 00:25:59.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.037 Controller IO queue size 128, less than required. 00:25:59.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:59.037 Initialization complete. Launching workers. 
00:25:59.037 ======================================================== 00:25:59.037 Latency(us) 00:25:59.037 Device Information : IOPS MiB/s Average min max 00:25:59.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1414.47 353.62 92187.69 57479.96 144122.07 00:25:59.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.49 150.87 220504.34 107954.93 312525.45 00:25:59.037 ======================================================== 00:25:59.037 Total : 2017.95 504.49 130561.87 57479.96 312525.45 00:25:59.037 00:25:59.037 11:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:59.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.037 No valid NVMe controllers or AIO or URING devices found 00:25:59.037 Initializing NVMe Controllers 00:25:59.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.037 Controller IO queue size 128, less than required. 00:25:59.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.037 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:59.037 Controller IO queue size 128, less than required. 00:25:59.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.037 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:59.037 WARNING: Some requested NVMe devices were skipped 00:25:59.037 11:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:59.297 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.839 Initializing NVMe Controllers 00:26:01.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.839 Controller IO queue size 128, less than required. 00:26:01.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.839 Controller IO queue size 128, less than required. 00:26:01.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:01.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:01.839 Initialization complete. Launching workers. 
00:26:01.839 00:26:01.839 ==================== 00:26:01.839 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:01.839 TCP transport: 00:26:01.839 polls: 32191 00:26:01.839 idle_polls: 13684 00:26:01.839 sock_completions: 18507 00:26:01.839 nvme_completions: 5627 00:26:01.839 submitted_requests: 8410 00:26:01.839 queued_requests: 1 00:26:01.839 00:26:01.839 ==================== 00:26:01.839 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:01.839 TCP transport: 00:26:01.839 polls: 25770 00:26:01.839 idle_polls: 8494 00:26:01.839 sock_completions: 17276 00:26:01.839 nvme_completions: 5891 00:26:01.839 submitted_requests: 8868 00:26:01.839 queued_requests: 1 00:26:01.839 ======================================================== 00:26:01.839 Latency(us) 00:26:01.839 Device Information : IOPS MiB/s Average min max 00:26:01.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1406.43 351.61 94027.53 45033.53 149525.32 00:26:01.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1472.42 368.11 88156.95 36063.35 129697.56 00:26:01.839 ======================================================== 00:26:01.839 Total : 2878.85 719.71 91024.95 36063.35 149525.32 00:26:01.839 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.839 rmmod nvme_tcp 00:26:01.839 rmmod nvme_fabrics 00:26:01.839 rmmod nvme_keyring 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1646722 ']' 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1646722 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1646722 ']' 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1646722 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:01.839 11:32:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1646722 00:26:01.839 11:32:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:01.839 11:32:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:01.839 11:32:59 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@967 -- # echo 'killing process with pid 1646722' 00:26:01.839 killing process with pid 1646722 00:26:01.839 11:32:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1646722 00:26:01.839 11:32:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1646722 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.387 11:33:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.301 11:33:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:06.301 00:26:06.301 real 0m27.892s 00:26:06.301 user 1m10.837s 00:26:06.301 sys 0m8.693s 00:26:06.301 11:33:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:06.301 11:33:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:06.301 ************************************ 00:26:06.301 END TEST nvmf_perf 00:26:06.301 ************************************ 00:26:06.563 11:33:03 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:06.563 11:33:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:06.563 11:33:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:06.563 11:33:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 ************************************ 00:26:06.563 START TEST nvmf_fio_host 00:26:06.563 ************************************ 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:06.563 * Looking for test storage... 
00:26:06.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.563 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:06.564 11:33:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:14.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:14.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:14.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:14.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.710 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.972 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.972 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.972 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:14.972 11:33:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:14.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:26:14.972 00:26:14.972 --- 10.0.0.2 ping statistics --- 00:26:14.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.972 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:26:14.972 00:26:14.972 --- 10.0.0.1 ping statistics --- 00:26:14.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.972 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1654656 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1654656 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1654656 ']' 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:14.972 11:33:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.234 [2024-06-10 11:33:12.223113] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:26:15.234 [2024-06-10 11:33:12.223173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.234 [2024-06-10 11:33:12.315151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.234 [2024-06-10 11:33:12.408376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:15.234 [2024-06-10 11:33:12.408434] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.234 [2024-06-10 11:33:12.408442] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.234 [2024-06-10 11:33:12.408449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.234 [2024-06-10 11:33:12.408454] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.234 [2024-06-10 11:33:12.408577] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.234 [2024-06-10 11:33:12.408713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.234 [2024-06-10 11:33:12.408879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.234 [2024-06-10 11:33:12.408924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:16.175 [2024-06-10 11:33:13.262931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.175 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:16.435 Malloc1 00:26:16.435 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:16.696 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:16.956 11:33:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.956 [2024-06-10 11:33:14.117565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.956 11:33:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:17.217 11:33:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:17.785 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:17.785 fio-3.35 00:26:17.785 Starting 1 thread 00:26:17.785 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.331 00:26:20.331 test: (groupid=0, jobs=1): err= 0: pid=1655150: Mon Jun 10 11:33:17 2024 00:26:20.331 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.5MiB/2005msec) 00:26:20.331 slat (nsec): min=1912, max=274472, avg=2043.55, stdev=2649.32 00:26:20.331 clat (usec): min=3691, max=11725, avg=6647.46, stdev=502.57 00:26:20.331 lat (usec): min=3725, max=11727, avg=6649.50, stdev=502.52 00:26:20.331 clat percentiles (usec): 00:26:20.331 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:26:20.331 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:26:20.331 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 00:26:20.331 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[10421], 99.95th=[11076], 00:26:20.331 | 99.99th=[11731] 00:26:20.331 bw ( KiB/s): 
min=41976, max=43080, per=99.85%, avg=42604.00, stdev=487.34, samples=4 00:26:20.331 iops : min=10494, max=10770, avg=10651.00, stdev=121.84, samples=4 00:26:20.331 write: IOPS=10.7k, BW=41.6MiB/s (43.6MB/s)(83.4MiB/2005msec); 0 zone resets 00:26:20.331 slat (nsec): min=1967, max=262445, avg=2139.40, stdev=2002.64 00:26:20.331 clat (usec): min=2892, max=9963, avg=5327.72, stdev=403.05 00:26:20.331 lat (usec): min=2909, max=9965, avg=5329.86, stdev=403.10 00:26:20.331 clat percentiles (usec): 00:26:20.331 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5014], 00:26:20.331 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5407], 00:26:20.331 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5932], 00:26:20.331 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7439], 99.95th=[ 8586], 00:26:20.331 | 99.99th=[ 9896] 00:26:20.331 bw ( KiB/s): min=42472, max=42816, per=100.00%, avg=42626.00, stdev=165.42, samples=4 00:26:20.331 iops : min=10618, max=10704, avg=10656.50, stdev=41.36, samples=4 00:26:20.331 lat (msec) : 4=0.15%, 10=99.78%, 20=0.06% 00:26:20.331 cpu : usr=71.76%, sys=26.20%, ctx=62, majf=0, minf=44 00:26:20.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:20.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:20.331 issued rwts: total=21388,21354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:20.331 00:26:20.331 Run status group 0 (all jobs): 00:26:20.331 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.5MiB (87.6MB), run=2005-2005msec 00:26:20.331 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.4MiB (87.5MB), run=2005-2005msec 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:20.331 11:33:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:20.331 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:20.331 fio-3.35 00:26:20.331 Starting 1 thread 00:26:20.331 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.936 00:26:22.936 test: (groupid=0, jobs=1): err= 0: pid=1655886: Mon Jun 10 11:33:19 2024 00:26:22.936 read: IOPS=9935, BW=155MiB/s (163MB/s)(311MiB/2004msec) 00:26:22.936 slat (usec): min=3, max=100, avg= 3.39, stdev= 1.49 00:26:22.936 clat (usec): min=1631, max=14867, avg=7765.95, stdev=1954.83 00:26:22.936 lat (usec): min=1634, max=14884, avg=7769.34, stdev=1955.02 00:26:22.936 clat percentiles (usec): 00:26:22.936 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 5997], 00:26:22.936 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8291], 00:26:22.936 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[11076], 00:26:22.936 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14353], 99.95th=[14484], 00:26:22.936 | 99.99th=[14746] 00:26:22.936 bw ( KiB/s): min=73344, max=86432, per=49.72%, avg=79032.00, stdev=5533.40, samples=4 00:26:22.936 iops : min= 4584, max= 5402, avg=4939.50, stdev=345.84, samples=4 00:26:22.936 write: IOPS=5691, BW=88.9MiB/s (93.2MB/s)(161MiB/1810msec); 0 zone resets 00:26:22.936 slat (usec): min=36, max=443, avg=38.13, stdev= 8.74 00:26:22.936 clat (usec): min=3134, max=16890, avg=8947.45, stdev=1490.98 00:26:22.936 lat (usec): min=3171, max=17029, avg=8985.58, stdev=1493.66 00:26:22.936 clat percentiles (usec): 00:26:22.936 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7701], 00:26:22.936 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:26:22.936 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:26:22.936 | 99.00th=[12780], 99.50th=[13698], 99.90th=[16581], 99.95th=[16712], 00:26:22.936 | 99.99th=[16909] 00:26:22.936 bw ( KiB/s): min=75808, max=89728, per=90.22%, avg=82152.00, stdev=5951.85, samples=4 00:26:22.936 iops : min= 4738, max= 5608, avg=5134.50, stdev=371.99, samples=4 00:26:22.936 lat (msec) : 2=0.05%, 4=0.57%, 10=84.08%, 20=15.30% 00:26:22.936 cpu : usr=84.52%, sys=13.78%, ctx=17, majf=0, minf=65 
00:26:22.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:22.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:22.936 issued rwts: total=19910,10301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:22.936 00:26:22.936 Run status group 0 (all jobs): 00:26:22.936 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=311MiB (326MB), run=2004-2004msec 00:26:22.936 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=161MiB (169MB), run=1810-1810msec 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:22.936 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:22.937 rmmod nvme_tcp 00:26:22.937 rmmod nvme_fabrics 00:26:22.937 rmmod nvme_keyring 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1654656 ']' 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1654656 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1654656 ']' 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1654656 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:22.937 11:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1654656 00:26:22.937 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:22.937 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:22.937 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1654656' 00:26:22.937 killing process with pid 1654656 00:26:22.937 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1654656 00:26:22.937 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1654656 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.198 11:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.116 11:33:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.116 00:26:25.116 real 0m18.621s 00:26:25.116 user 0m57.089s 00:26:25.116 sys 0m8.212s 00:26:25.116 11:33:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:25.116 11:33:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.116 ************************************ 00:26:25.116 END TEST nvmf_fio_host 00:26:25.116 ************************************ 00:26:25.116 11:33:22 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:25.116 11:33:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:25.116 11:33:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:25.116 11:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.116 ************************************ 00:26:25.116 START TEST nvmf_failover 00:26:25.116 ************************************ 00:26:25.116 11:33:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:25.377 * Looking for test storage... 
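The nvmf_fio_host run that finishes above drives I/O through the SPDK fio plugin rather than the kernel NVMe initiator, and the target subsystem is selected entirely by the --filename string; a minimal sketch of that invocation, with the plugin path and addresses taken from this run (example_config.fio is assumed to be what sets ioengine=spdk, as the fio banner above reports):

  # preload the SPDK external ioengine, then point fio at the NVMe-oF/TCP subsystem
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096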
00:26:25.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.377 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.378 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.378 11:33:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.378 11:33:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.521 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.521 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:33.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:26:33.522 00:26:33.522 --- 10.0.0.2 ping statistics --- 00:26:33.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.522 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:26:33.522 00:26:33.522 --- 10.0.0.1 ping statistics --- 00:26:33.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.522 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1660456 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1660456 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1660456 ']' 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:33.522 11:33:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:33.522 [2024-06-10 11:33:30.743044] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
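The failover test below talks to a target that lives in its own network namespace on the same host; the address plan verified by the two pings above reduces to the following steps, with cvl_0_0 and cvl_0_1 being the names detected for the two E810 ports earlier in this run:

  # move one port into the target namespace, keep the other as the initiator side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT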
00:26:33.522 [2024-06-10 11:33:30.743111] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.783 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.783 [2024-06-10 11:33:30.820861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.783 [2024-06-10 11:33:30.887386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.783 [2024-06-10 11:33:30.887424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.783 [2024-06-10 11:33:30.887431] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.783 [2024-06-10 11:33:30.887438] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.783 [2024-06-10 11:33:30.887443] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.783 [2024-06-10 11:33:30.887547] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.783 [2024-06-10 11:33:30.887697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.783 [2024-06-10 11:33:30.887698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:34.724 [2024-06-10 11:33:31.805916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.724 11:33:31 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:34.985 Malloc0 00:26:34.985 11:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.246 11:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.246 11:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.506 [2024-06-10 11:33:32.646256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.506 11:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:35.767 [2024-06-10 11:33:32.850789] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:35.767 11:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:36.027 [2024-06-10 11:33:33.059439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1661018 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1661018 /var/tmp/bdevperf.sock 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1661018 ']' 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:36.027 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:36.287 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:36.287 11:33:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:36.287 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.548 NVMe0n1 00:26:36.548 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.809 00:26:36.809 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:36.809 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1661036 00:26:36.809 11:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:37.749 11:33:34 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.010 [2024-06-10 11:33:35.105801] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105861] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the 
state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105867] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105872] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105877] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105899] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105926] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105940] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105944] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105967] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105972] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105976] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 [2024-06-10 11:33:35.105985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d490 is same with the state(5) to be set 00:26:38.010 11:33:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:41.306 11:33:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:41.306 00:26:41.306 11:33:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:41.566 [2024-06-10 11:33:38.616310] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616352] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616360] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616366] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616377] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616389] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616395] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 [2024-06-10 11:33:38.616401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66eb90 is same with the state(5) to be set 00:26:41.566 11:33:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:44.868 11:33:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.868 [2024-06-10 11:33:41.827970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.868 11:33:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:45.808 11:33:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:46.069 [2024-06-10 11:33:43.045224] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045266] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045274] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045286] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045298] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045304] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045310] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045322] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045334] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045340] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 [2024-06-10 11:33:43.045345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f270 is same with the state(5) to be set 00:26:46.069 11:33:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1661036 00:26:52.667 0 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1661018 ']' 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1661018' 00:26:52.667 killing process with pid 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@968 -- # kill 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1661018 00:26:52.667 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:52.667 [2024-06-10 11:33:33.125700] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:26:52.667 [2024-06-10 11:33:33.125752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661018 ] 00:26:52.667 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.668 [2024-06-10 11:33:33.206947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.668 [2024-06-10 11:33:33.268102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.668 Running I/O for 15 seconds... 00:26:52.668 [2024-06-10 11:33:35.106598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.668 [2024-06-10 11:33:35.106630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.668 [2024-06-10 11:33:35.106647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.668 [2024-06-10 11:33:35.106662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.668 [2024-06-10 11:33:35.106676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1db60 is same with the state(5) to be set 00:26:52.668 [2024-06-10 11:33:35.106750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.668 [2024-06-10 11:33:35.106759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.668 [2024-06-10 11:33:35.106781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.668 [2024-06-10 11:33:35.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.668 [2024-06-10 11:33:35.106797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.668 [2024-06-10 11:33:35.106805 .. 11:33:35.108778] nvme_qpair.c: 243/474 (condensed): roughly 125 further queued I/Os on sqid:1, WRITE lba:97992-98920 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:97912-97960 (len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the repeated NOTICE pairs differ only in cid/lba (Jenkins timestamps 00:26:52.668-00:26:52.671)
[2024-06-10 11:33:35.108795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-06-10 11:33:35.108801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-06-10 11:33:35.108807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97968 len:8 PRP1 0x0 PRP2 0x0
[2024-06-10 11:33:35.108814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-10 11:33:35.108853] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3ec70 was disconnected and freed. reset controller.
[2024-06-10 11:33:35.108861] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-06-10 11:33:35.108869] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-06-10 11:33:35.112093] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-06-10 11:33:35.112113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1db60 (9): Bad file descriptor
[2024-06-10 11:33:35.181655] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
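The burst of per-command notices above is the expected side effect of bdev_nvme tearing down I/O qpair qid:1: while the controller fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and nqn.2016-06.io.spdk:cnode1 is reset, every queued WRITE/READ is completed manually with ABORTED - SQ DELETION before the path is re-established. Purely as an illustration, and assuming this console output has been saved to a local file (the name console.log below is hypothetical, not something the job produces), such a burst can be summarized with ordinary grep instead of being read entry by entry:

# Hypothetical offline triage of the abort burst; console.log stands in for a saved copy of this output.
grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l     # total aborted completions
grep -Eo '(WRITE|READ) sqid:1' console.log | sort | uniq -c     # aborted WRITEs vs READs on qid 1
grep -Eo 'Start failover from [^ ]+ to [^ ]+' console.log       # failover transitions taken

Counting occurrences with grep -o rather than grep -c avoids undercounting when, as here, many notices share a single console line.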
00:26:52.672 [2024-06-10 11:33:38.616653 .. 11:33:38.617912] nvme_qpair.c: 243/474 (condensed): a second abort burst on sqid:1 after the reconnect, WRITE lba:125856-126160 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ lba:125160-125488 (len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the repeated NOTICE pairs differ only in cid/lba (Jenkins timestamps 00:26:52.672-00:26:52.674)
00:26:52.674 [2024-06-10 11:33:38.617919] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.617927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.617933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.617942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.617948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.617957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.617963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.617972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.617978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.617993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.674 [2024-06-10 11:33:38.618122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.674 [2024-06-10 11:33:38.618129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.675 [2024-06-10 11:33:38.618496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.675 [2024-06-10 11:33:38.618511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.675 [2024-06-10 11:33:38.618534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.675 [2024-06-10 11:33:38.618614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.675 [2024-06-10 11:33:38.618622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe40e90 is same with the state(5) to be set 00:26:52.675 [2024-06-10 11:33:38.618634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:52.675 [2024-06-10 11:33:38.618639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:52.675 [2024-06-10 11:33:38.618647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125848 len:8 PRP1 0x0 PRP2 0x0 00:26:52.676 [2024-06-10 11:33:38.618654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:38.618689] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe40e90 was disconnected and freed. reset controller. 
00:26:52.676 [2024-06-10 11:33:38.618697] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:26:52.676 [2024-06-10 11:33:38.618714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.676 [2024-06-10 11:33:38.618722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.676 [2024-06-10 11:33:38.618730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.676 [2024-06-10 11:33:38.618737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.676 [2024-06-10 11:33:38.618745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.676 [2024-06-10 11:33:38.618752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.676 [2024-06-10 11:33:38.618759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.676 [2024-06-10 11:33:38.618766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.676 [2024-06-10 11:33:38.618773] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:52.676 [2024-06-10 11:33:38.622045] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:52.676 [2024-06-10 11:33:38.622069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1db60 (9): Bad file descriptor 
00:26:52.676 [2024-06-10 11:33:38.664959] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
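The block above is the interesting part of this burst: once the TCP qpair goes away, bdev_nvme completes every queued command with ABORTED - SQ DELETION, frees the qpair, fails the controller over from 10.0.0.2:4421 to 10.0.0.2:4422, and the subsequent reset succeeds. For reference, a minimal sketch of how this kind of failover can be provoked through SPDK's RPC interface is shown below; the NQN and listener addresses come from the log, but the bdev names, Malloc backing device, socket path and loop structure are illustrative assumptions, not the exact commands this job ran.

# Sketch only: target side (default RPC socket), one namespace exported on several TCP listeners.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Host side (a separate SPDK app reached through its own RPC socket): register every path
# under the same controller name so bdev_nvme can fail over between them; recent SPDK
# releases may additionally want an explicit -x failover / -x multipath option here.
for port in 4420 4421 4422; do
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Removing the listener the host is currently connected to forces the failover seen above.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

After the remove_listener call, the host-side log is expected to show the same abort/failover/reset sequence as above, ending in "Resetting controller successful."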
00:26:52.676 [2024-06-10 11:33:43.045954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.045988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.676 [2024-06-10 11:33:43.046403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.676 [2024-06-10 11:33:43.046412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.677 [2024-06-10 11:33:43.046418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.677 [2024-06-10 11:33:43.046433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.677 [2024-06-10 11:33:43.046449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87296 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.677 [2024-06-10 11:33:43.046466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:52.677 [2024-06-10 11:33:43.046620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.677 [2024-06-10 11:33:43.046953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.677 [2024-06-10 11:33:43.046961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.046968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.046976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.046983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.046992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.046998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.678 [2024-06-10 11:33:43.047391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.678 [2024-06-10 11:33:43.047398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:52.679 [2024-06-10 11:33:43.047421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047575] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.679 [2024-06-10 11:33:43.047734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87432 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.679 [2024-06-10 11:33:43.047904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.679 [2024-06-10 11:33:43.047912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.680 [2024-06-10 11:33:43.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.047927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.680 [2024-06-10 11:33:43.047934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.047942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.680 [2024-06-10 11:33:43.047949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.047957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.680 [2024-06-10 11:33:43.047964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.047982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:52.680 [2024-06-10 11:33:43.047988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:52.680 [2024-06-10 11:33:43.047994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87480 len:8 PRP1 0x0 PRP2 0x0 00:26:52.680 [2024-06-10 11:33:43.048001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.048037] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe40c80 was disconnected and freed. reset controller. 
00:26:52.680 [2024-06-10 11:33:43.048046] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:52.680 [2024-06-10 11:33:43.048062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.680 [2024-06-10 11:33:43.048070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.048080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.680 [2024-06-10 11:33:43.048087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.048095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.680 [2024-06-10 11:33:43.048101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.048108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.680 [2024-06-10 11:33:43.048115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.680 [2024-06-10 11:33:43.048122] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.680 [2024-06-10 11:33:43.051377] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.680 [2024-06-10 11:33:43.051401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1db60 (9): Bad file descriptor 00:26:52.680 [2024-06-10 11:33:43.123273] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
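The run of *NOTICE* pairs above is one entry per in-flight command: nvme_io_qpair_print_command reports each READ/WRITE that was still queued when the submission queue was deleted, and each is then completed with ABORTED - SQ DELETION (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion); the qpair is then freed and the failover begins. If the volume of aborts needs to be tallied rather than read line by line, a rough check against the captured output file could be (illustrative only; try.txt is the output file used in the later steps of this run):

  # Count commands aborted by the queue deletion during failover (rough tally,
  # assumes the messages were captured to try.txt as in the later steps of this log).
  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt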
00:26:52.680 00:26:52.680 Latency(us) 00:26:52.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.680 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:52.680 Verification LBA range: start 0x0 length 0x4000 00:26:52.680 NVMe0n1 : 15.00 9826.01 38.38 480.75 0.00 12392.00 718.38 13208.02 00:26:52.680 =================================================================================================================== 00:26:52.680 Total : 9826.01 38.38 480.75 0.00 12392.00 718.38 13208.02 00:26:52.680 Received shutdown signal, test time was about 15.000000 seconds 00:26:52.680 00:26:52.680 Latency(us) 00:26:52.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.680 =================================================================================================================== 00:26:52.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1663586 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1663586 /var/tmp/bdevperf.sock 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1663586 ']' 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:52.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:52.680 11:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:52.979 11:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:52.979 11:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:52.979 11:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:53.240 [2024-06-10 11:33:50.361231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:53.240 11:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:53.501 [2024-06-10 11:33:50.561739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:53.501 11:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.073 NVMe0n1 00:26:54.073 11:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.334 00:26:54.334 11:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.595 00:26:54.595 11:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:54.595 11:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:54.856 11:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:55.116 11:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:58.414 11:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.414 11:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:58.414 11:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1664687 00:26:58.414 11:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1664687 00:26:58.414 11:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:59.358 0 00:26:59.358 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:59.358 [2024-06-10 11:33:49.333840] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:26:59.358 [2024-06-10 11:33:49.333899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663586 ] 00:26:59.358 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.358 [2024-06-10 11:33:49.413883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.358 [2024-06-10 11:33:49.475402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.358 [2024-06-10 11:33:52.132526] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:59.358 [2024-06-10 11:33:52.132569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.358 [2024-06-10 11:33:52.132579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.358 [2024-06-10 11:33:52.132588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.358 [2024-06-10 11:33:52.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.358 [2024-06-10 11:33:52.132602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.358 [2024-06-10 11:33:52.132609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.358 [2024-06-10 11:33:52.132616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.358 [2024-06-10 11:33:52.132623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.358 [2024-06-10 11:33:52.132629] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.358 [2024-06-10 11:33:52.132653] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.358 [2024-06-10 11:33:52.132668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19edb60 (9): Bad file descriptor 00:26:59.358 [2024-06-10 11:33:52.184778] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:59.358 Running I/O for 1 seconds... 
00:26:59.358 00:26:59.358 Latency(us) 00:26:59.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.358 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:59.358 Verification LBA range: start 0x0 length 0x4000 00:26:59.358 NVMe0n1 : 1.01 10040.01 39.22 0.00 0.00 12695.00 2545.82 16232.76 00:26:59.358 =================================================================================================================== 00:26:59.358 Total : 10040.01 39.22 0.00 0.00 12695.00 2545.82 16232.76 00:26:59.358 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.358 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:59.618 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:59.879 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.880 11:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:59.880 11:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:00.140 11:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1663586 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1663586 ']' 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1663586 00:27:03.436 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1663586 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1663586' 00:27:03.437 killing process with pid 1663586 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1663586 00:27:03.437 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1663586 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:03.697 
11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.697 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.697 rmmod nvme_tcp 00:27:03.957 rmmod nvme_fabrics 00:27:03.957 rmmod nvme_keyring 00:27:03.957 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.957 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1660456 ']' 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1660456 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1660456 ']' 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1660456 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:03.958 11:34:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1660456 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1660456' 00:27:03.958 killing process with pid 1660456 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1660456 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1660456 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.958 11:34:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.496 11:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.496 00:27:06.496 real 0m40.921s 00:27:06.496 user 2m4.551s 00:27:06.496 sys 0m8.913s 00:27:06.496 11:34:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:06.496 11:34:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
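Condensed from the failover.sh trace above, the shape of the test is: expose the subsystem on the two alternate ports (failover.sh@76-77), attach bdevperf to all three paths over its private RPC socket (@78-80), force failovers by detaching the active path while I/O runs (@84, @98, @100), and require the expected number of 'Resetting controller successful' messages in the captured output (@65-67). A minimal sketch of that flow, using the addresses, ports, and NQN shown in the log (the loop structure and the $output_file variable are assumptions, not the verbatim script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Add the two alternate listeners on the target.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Attach bdevperf to each path through its own RPC socket.
  for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done

  # Force a failover by detaching the currently active path while I/O runs.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN

  # Pass/fail: one successful reset per forced failover (three in total).
  count=$(grep -c 'Resetting controller successful' "$output_file")
  (( count == 3 )) || exit 1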
00:27:06.496 ************************************ 00:27:06.496 END TEST nvmf_failover 00:27:06.496 ************************************ 00:27:06.496 11:34:03 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:06.496 11:34:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:06.496 11:34:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:06.496 11:34:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.496 ************************************ 00:27:06.496 START TEST nvmf_host_discovery 00:27:06.496 ************************************ 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:06.496 * Looking for test storage... 00:27:06.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.496 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.497 11:34:03 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.497 11:34:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.627 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:14.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:14.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:14.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:14.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:27:14.628 00:27:14.628 --- 10.0.0.2 ping statistics --- 00:27:14.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.628 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:27:14.628 00:27:14.628 --- 10.0.0.1 ping statistics --- 00:27:14.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.628 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1669831 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1669831 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1669831 ']' 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:14.628 11:34:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.628 [2024-06-10 11:34:11.842172] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:27:14.628 [2024-06-10 11:34:11.842243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.889 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.889 [2024-06-10 11:34:11.918277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.889 [2024-06-10 11:34:11.988637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.889 [2024-06-10 11:34:11.988677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.889 [2024-06-10 11:34:11.988685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.889 [2024-06-10 11:34:11.988691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.889 [2024-06-10 11:34:11.988696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
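The bring-up traced above isolates the target-side port in its own network namespace so both ends of the NVMe/TCP connection can run on a single host: the cvl_0_0 port moves into cvl_0_0_ns_spdk with address 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, and nvmf_tgt is then launched inside the namespace. Condensed from the ip/iptables calls in the trace (device names, addresses, and flags as logged; backgrounding of nvmf_tgt is an assumption):

  # Target port in a private namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace so it listens on 10.0.0.2.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &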
00:27:14.889 [2024-06-10 11:34:11.988718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.831 [2024-06-10 11:34:12.730952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.831 [2024-06-10 11:34:12.743146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.831 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.832 null0 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.832 null1 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1670124 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1670124 /tmp/host.sock 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1670124 ']' 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:15.832 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:15.832 11:34:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.832 [2024-06-10 11:34:12.830063] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:27:15.832 [2024-06-10 11:34:12.830108] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670124 ] 00:27:15.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.832 [2024-06-10 11:34:12.909972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.832 [2024-06-10 11:34:12.971141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:16.775 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.776 11:34:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 [2024-06-10 11:34:14.054515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:17.037 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:27:17.038 11:34:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:17.609 [2024-06-10 11:34:14.748979] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:17.609 [2024-06-10 11:34:14.748998] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:17.609 [2024-06-10 11:34:14.749010] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.869 [2024-06-10 11:34:14.837278] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:17.869 [2024-06-10 11:34:14.899548] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:17.869 [2024-06-10 11:34:14.899566] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:18.129 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:18.130 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:18.390 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:18.391 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.651 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.651 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:18.652 11:34:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.592 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.853 [2024-06-10 11:34:16.862087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:19.853 [2024-06-10 11:34:16.862884] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:19.853 [2024-06-10 11:34:16.862910] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:19.853 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.854 [2024-06-10 11:34:16.952170] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.854 11:34:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.854 11:34:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:19.854 11:34:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:19.854 [2024-06-10 11:34:17.057907] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:19.854 [2024-06-10 11:34:17.057923] 
bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:19.854 [2024-06-10 11:34:17.057928] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 [2024-06-10 11:34:18.142087] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:21.244 [2024-06-10 11:34:18.142107] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:21.244 [2024-06-10 11:34:18.147951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.244 [2024-06-10 11:34:18.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.244 [2024-06-10 11:34:18.147977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.244 [2024-06-10 11:34:18.147984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.244 [2024-06-10 11:34:18.147991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.244 [2024-06-10 11:34:18.148005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.244 [2024-06-10 11:34:18.148013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.244 [2024-06-10 11:34:18.148020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.244 [2024-06-10 11:34:18.148026] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.244 [2024-06-10 11:34:18.157964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.244 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 [2024-06-10 11:34:18.168003] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.244 [2024-06-10 11:34:18.168312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.244 [2024-06-10 11:34:18.168325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.244 [2024-06-10 11:34:18.168332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.244 [2024-06-10 11:34:18.168343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.244 [2024-06-10 11:34:18.168361] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.244 [2024-06-10 11:34:18.168367] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.244 [2024-06-10 11:34:18.168375] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.244 [2024-06-10 11:34:18.168385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.244 [2024-06-10 11:34:18.178054] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.244 [2024-06-10 11:34:18.178360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.244 [2024-06-10 11:34:18.178371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.178378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.178388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.178404] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.178410] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.178416] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.178426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.245 [2024-06-10 11:34:18.188103] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.188449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.188460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.188466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.188476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.188492] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.188499] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.188505] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.188515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.245 [2024-06-10 11:34:18.198155] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.198413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.198424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.198430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.198440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.198450] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.198456] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.198462] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.198472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.245 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.245 [2024-06-10 11:34:18.208203] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.208453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.208463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.208473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.208483] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.208493] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.208499] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.208505] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.208514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.245 [2024-06-10 11:34:18.218252] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.218590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.218601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.218608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.218618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.218633] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.218639] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.218646] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.218655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.245 [2024-06-10 11:34:18.228302] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.228612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.228622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.228628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.228638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.228647] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.228653] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.228659] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.228669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.245 [2024-06-10 11:34:18.238350] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.245 [2024-06-10 11:34:18.238606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.245 [2024-06-10 11:34:18.238617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.245 [2024-06-10 11:34:18.238624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.245 [2024-06-10 11:34:18.238633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.245 [2024-06-10 11:34:18.238643] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.245 [2024-06-10 11:34:18.238649] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.245 [2024-06-10 11:34:18.238659] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.245 [2024-06-10 11:34:18.238669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.246 [2024-06-10 11:34:18.248400] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.246 [2024-06-10 11:34:18.248617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.246 [2024-06-10 11:34:18.248628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.246 [2024-06-10 11:34:18.248634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.246 [2024-06-10 11:34:18.248644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.246 [2024-06-10 11:34:18.248653] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.246 [2024-06-10 11:34:18.248658] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.246 [2024-06-10 11:34:18.248665] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.246 [2024-06-10 11:34:18.248674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:21.246 [2024-06-10 11:34:18.258448] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:21.246 [2024-06-10 11:34:18.258754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.246 [2024-06-10 11:34:18.258765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.246 [2024-06-10 11:34:18.258771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.246 [2024-06-10 11:34:18.258781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.246 [2024-06-10 11:34:18.258796] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.246 [2024-06-10 11:34:18.258802] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.246 [2024-06-10 11:34:18.258808] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.246 [2024-06-10 11:34:18.258817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:21.246 [2024-06-10 11:34:18.268496] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:21.246 [2024-06-10 11:34:18.268742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.246 [2024-06-10 11:34:18.268753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a120 with addr=10.0.0.2, port=4420 00:27:21.246 [2024-06-10 11:34:18.268759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a120 is same with the state(5) to be set 00:27:21.246 [2024-06-10 11:34:18.268769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a120 (9): Bad file descriptor 00:27:21.246 [2024-06-10 11:34:18.268779] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.246 [2024-06-10 11:34:18.268784] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.246 [2024-06-10 11:34:18.268791] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.246 [2024-06-10 11:34:18.268800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
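The get_subsystem_paths helper whose trace appears above (host/discovery.sh@63) reduces the controller's path list to its listener ports. A sketch reconstructed from the traced pipeline, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:
get_subsystem_paths() {
    # print the trsvcid (listener port) of every path on the named controller,
    # numerically sorted and collapsed onto one line, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n \
        | xargs
}
The surrounding waitforcondition call keeps polling until this prints only "$NVMF_SECOND_PORT" (4421), i.e. until the removed 4420 listener has disappeared from the controller's path list.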
00:27:21.246 [2024-06-10 11:34:18.271848] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:21.246 [2024-06-10 11:34:18.271864] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:21.246 11:34:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:22.196 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:22.197 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.527 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:22.528 11:34:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.528 11:34:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.471 [2024-06-10 11:34:20.645681] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:23.471 [2024-06-10 11:34:20.645701] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:23.471 [2024-06-10 11:34:20.645712] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:23.732 [2024-06-10 11:34:20.735002] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:23.732 [2024-06-10 11:34:20.838975] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:23.732 [2024-06-10 11:34:20.839006] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.732 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.733 request: 00:27:23.733 { 00:27:23.733 "name": "nvme", 00:27:23.733 "trtype": "tcp", 00:27:23.733 "traddr": "10.0.0.2", 00:27:23.733 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:27:23.733 "adrfam": "ipv4", 00:27:23.733 "trsvcid": "8009", 00:27:23.733 "wait_for_attach": true, 00:27:23.733 "method": "bdev_nvme_start_discovery", 00:27:23.733 "req_id": 1 00:27:23.733 } 00:27:23.733 Got JSON-RPC error response 00:27:23.733 response: 00:27:23.733 { 00:27:23.733 "code": -17, 00:27:23.733 "message": "File exists" 00:27:23.733 } 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.733 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.993 request: 00:27:23.993 { 00:27:23.993 "name": "nvme_second", 00:27:23.993 "trtype": "tcp", 00:27:23.993 "traddr": "10.0.0.2", 00:27:23.993 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:23.993 "adrfam": "ipv4", 00:27:23.993 "trsvcid": "8009", 00:27:23.993 "wait_for_attach": true, 00:27:23.993 "method": "bdev_nvme_start_discovery", 00:27:23.993 "req_id": 1 00:27:23.993 } 00:27:23.993 Got JSON-RPC error response 00:27:23.993 response: 00:27:23.993 { 00:27:23.993 "code": -17, 00:27:23.993 "message": "File exists" 00:27:23.993 } 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:23.993 11:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.993 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:23.994 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.994 11:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.933 [2024-06-10 11:34:22.106503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.933 [2024-06-10 11:34:22.106533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184fe10 with addr=10.0.0.2, port=8010 00:27:24.933 [2024-06-10 11:34:22.106547] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:24.933 [2024-06-10 11:34:22.106558] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:24.933 [2024-06-10 11:34:22.106565] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:26.315 [2024-06-10 11:34:23.108845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.315 [2024-06-10 11:34:23.108867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1850080 with addr=10.0.0.2, port=8010 00:27:26.315 [2024-06-10 11:34:23.108877] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:26.315 [2024-06-10 11:34:23.108883] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:26.315 [2024-06-10 11:34:23.108889] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:27.255 [2024-06-10 11:34:24.110838] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:27.255 request: 00:27:27.255 { 00:27:27.255 "name": "nvme_second", 00:27:27.255 "trtype": "tcp", 00:27:27.255 "traddr": "10.0.0.2", 00:27:27.255 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:27.255 "adrfam": "ipv4", 00:27:27.255 "trsvcid": "8010", 00:27:27.255 "attach_timeout_ms": 3000, 00:27:27.255 "method": "bdev_nvme_start_discovery", 00:27:27.255 "req_id": 1 00:27:27.255 } 00:27:27.255 Got JSON-RPC error response 00:27:27.255 response: 00:27:27.255 { 00:27:27.255 "code": -110, 00:27:27.255 "message": "Connection timed out" 00:27:27.255 } 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:27.255 
11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1670124 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.255 rmmod nvme_tcp 00:27:27.255 rmmod nvme_fabrics 00:27:27.255 rmmod nvme_keyring 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1669831 ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 1669831 ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1669831' 00:27:27.255 killing process with pid 
1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 1669831 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.255 11:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.795 00:27:29.795 real 0m23.198s 00:27:29.795 user 0m28.249s 00:27:29.795 sys 0m7.685s 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.795 ************************************ 00:27:29.795 END TEST nvmf_host_discovery 00:27:29.795 ************************************ 00:27:29.795 11:34:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:29.795 11:34:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:29.795 11:34:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:29.795 11:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:29.795 ************************************ 00:27:29.795 START TEST nvmf_host_multipath_status 00:27:29.795 ************************************ 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:29.795 * Looking for test storage... 
00:27:29.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.795 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:29.796 11:34:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.796 11:34:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.936 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:37.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:37.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:37.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:37.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.937 11:34:34 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.937 11:34:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:27:37.937 00:27:37.937 --- 10.0.0.2 ping statistics --- 00:27:37.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.937 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:27:37.937 00:27:37.937 --- 10.0.0.1 ping statistics --- 00:27:37.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.937 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.937 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1676623 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1676623 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1676623 ']' 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:37.938 11:34:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:38.200 [2024-06-10 11:34:35.178887] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
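nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 1676623) and waitforlisten blocks until the target's RPC socket answers before any configuration RPCs are sent. A rough sketch of that launch-and-wait pattern, with the binary path shortened and the socket polling approximated (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh):
# start the target on cores 0-1 (-m 0x3) inside the target network namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# poll the default RPC socket until the app is ready; give up if it died early
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done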
00:27:38.200 [2024-06-10 11:34:35.178952] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.200 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.200 [2024-06-10 11:34:35.270449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:38.200 [2024-06-10 11:34:35.361802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.200 [2024-06-10 11:34:35.361865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.200 [2024-06-10 11:34:35.361873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.200 [2024-06-10 11:34:35.361880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.200 [2024-06-10 11:34:35.361885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.200 [2024-06-10 11:34:35.361951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.200 [2024-06-10 11:34:35.362023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1676623 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:39.142 [2024-06-10 11:34:36.257995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.142 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:39.403 Malloc0 00:27:39.403 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:39.663 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.664 11:34:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.925 [2024-06-10 11:34:37.056376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.925 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:40.186 [2024-06-10 11:34:37.248864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1676955 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1676955 /var/tmp/bdevperf.sock 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1676955 ']' 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:40.186 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.447 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:40.447 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:27:40.447 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:40.708 11:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:40.969 Nvme0n1 00:27:40.969 11:34:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:41.541 Nvme0n1 00:27:41.541 11:34:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:41.541 11:34:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:43.455 11:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:43.455 11:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:43.715 11:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:43.715 11:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:45.097 11:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:45.097 11:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:45.097 11:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.097 11:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.097 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.097 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:45.097 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.097 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.357 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:45.617 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.617 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:45.617 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.617 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:27:45.876 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.876 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:45.876 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.876 11:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:46.136 11:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.136 11:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:46.136 11:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:46.396 11:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:46.396 11:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.777 11:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.037 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.297 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.297 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:48.297 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.297 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:48.557 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.557 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:48.557 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.557 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.818 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.818 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:48.818 11:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.078 11:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:49.078 11:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:50.080 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:50.080 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:50.080 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.080 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.339 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.340 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:27:50.340 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.340 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.599 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.599 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.599 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.599 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.858 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.858 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.858 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.858 11:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.858 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.858 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:50.858 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.859 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.118 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.118 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:51.118 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.118 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.377 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.377 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:51.378 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:51.637 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:51.637 11:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:53.018 11:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:53.018 11:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.018 11:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.018 11:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.018 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.018 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:53.018 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.018 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.278 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.538 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.538 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:53.538 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.538 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.798 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:27:53.798 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:53.798 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.798 11:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:54.058 11:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:54.058 11:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:54.058 11:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:54.319 11:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:54.319 11:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:55.701 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:55.701 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:55.701 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.701 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.701 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:55.702 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:55.702 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.702 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:55.702 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:55.702 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:55.962 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.963 11:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.963 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.963 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:27:55.963 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.963 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.223 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.223 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:56.223 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.223 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.483 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.483 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:56.483 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.483 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:56.743 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.743 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:56.743 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:56.743 11:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:57.003 11:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:57.944 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:57.944 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:57.944 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.945 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.206 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.206 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:58.206 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.206 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.466 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.466 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:58.466 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.466 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.727 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.727 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.727 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.727 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.987 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.987 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:58.987 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.987 11:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:58.987 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.987 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:58.988 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.988 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.248 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.248 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:59.508 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:59.508 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:59.768 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:00.027 11:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:00.965 11:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:00.965 11:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:00.965 11:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.965 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.225 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.485 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.485 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.485 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.485 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.745 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.745 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.745 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.745 11:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:02.006 11:34:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:02.006 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:02.266 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:02.526 11:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:03.466 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:03.466 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:03.466 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.466 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.726 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.726 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:03.726 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.726 11:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.986 11:35:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.986 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.245 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.245 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.245 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.245 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:04.505 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:04.764 11:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:05.024 11:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:05.963 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:05.963 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:05.963 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.963 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:06.223 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.223 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:06.223 11:35:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.223 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:06.482 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.483 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:06.743 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.743 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:06.743 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.743 11:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:07.003 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.003 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:07.003 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.003 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:07.262 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.262 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:07.262 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:07.262 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:07.523 11:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:08.463 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:08.463 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.723 11:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:08.984 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.984 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:08.984 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.984 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:09.244 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.244 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:09.244 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.245 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.511 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1676955 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1676955 ']' 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1676955 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1676955 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1676955' 00:28:09.808 killing process with pid 1676955 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1676955 00:28:09.808 11:35:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1676955 00:28:10.089 Connection closed with partial response: 00:28:10.089 00:28:10.089 00:28:10.089 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1676955 00:28:10.089 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.089 [2024-06-10 11:34:37.308934] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:28:10.089 [2024-06-10 11:34:37.308991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676955 ] 00:28:10.089 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.089 [2024-06-10 11:34:37.363464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.089 [2024-06-10 11:34:37.415989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.089 Running I/O for 90 seconds... 
00:28:10.089 [2024-06-10 11:34:51] nvme_qpair.c: *NOTICE*: [repeated entries omitted: READ/WRITE commands on sqid:1 (lba 10552-11256, len:8) and their completions, all reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1]
00:28:10.092 [2024-06-10 11:35:04] nvme_qpair.c: *NOTICE*: [repeated entries omitted: READ/WRITE commands on sqid:1 (lba 113288-114320, len:8) and their completions, all reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1]
00:28:10.094 Received shutdown signal, test time was about 28.301603 seconds
00:28:10.094
00:28:10.094 Latency(us)
00:28:10.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:10.094 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:10.094 Verification LBA range: start 0x0 length 0x4000
00:28:10.094 Nvme0n1 : 28.30 10039.19 39.22 0.00 0.00 12730.32 307.20 3019898.88
3019898.88 00:28:10.094 =================================================================================================================== 00:28:10.095 Total : 10039.19 39.22 0.00 0.00 12730.32 307.20 3019898.88 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.095 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.095 rmmod nvme_tcp 00:28:10.356 rmmod nvme_fabrics 00:28:10.356 rmmod nvme_keyring 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1676623 ']' 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1676623 ']' 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1676623' 00:28:10.356 killing process with pid 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1676623 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:10.356 11:35:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:12.899 11:35:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:12.899
00:28:12.899 real 0m43.029s
00:28:12.899 user 1m51.594s
00:28:12.899 sys 0m11.951s
00:28:12.899 11:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
00:28:12.899 11:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:28:12.899 ************************************
00:28:12.899 END TEST nvmf_host_multipath_status
00:28:12.899 ************************************
00:28:12.899 11:35:09 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:12.899 11:35:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:28:12.899 11:35:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:28:12.899 11:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:12.899 ************************************
00:28:12.899 START TEST nvmf_discovery_remove_ifc
00:28:12.899 ************************************
00:28:12.899 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:12.899 * Looking for test storage...
00:28:12.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.899 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.900 11:35:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.041 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:21.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:21.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.042 11:35:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:21.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:21.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.042 11:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.042 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:28:21.042 00:28:21.042 --- 10.0.0.2 ping statistics --- 00:28:21.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.042 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:28:21.303 00:28:21.303 --- 10.0.0.1 ping statistics --- 00:28:21.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.303 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.303 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1686512 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1686512 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1686512 ']' 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:21.304 11:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.304 [2024-06-10 11:35:18.367522] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
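The nvmf_tcp_init trace above builds the two-port TCP fixture for this test: the first ice port (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed as the target side (10.0.0.2/24), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule opens TCP/4420 on the initiator interface, and one ping in each direction confirms reachability before nvmf_tgt is started inside that namespace (the nvmfappstart / waitforlisten lines above). A condensed sketch of those steps, using the interface names and addresses printed in the trace (a summary of what the log shows, not the literal nvmf/common.sh code):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the initiator port
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root namespace

Because the target runs inside the namespace, every later command aimed at it, from the nvmf_tgt launch above to the ip addr del/add steps later in the test, is wrapped in ip netns exec cvl_0_0_ns_spdk.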
00:28:21.304 [2024-06-10 11:35:18.367616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.304 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.304 [2024-06-10 11:35:18.449291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.304 [2024-06-10 11:35:18.518252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.304 [2024-06-10 11:35:18.518290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.304 [2024-06-10 11:35:18.518297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.304 [2024-06-10 11:35:18.518303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.304 [2024-06-10 11:35:18.518308] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.304 [2024-06-10 11:35:18.518327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.245 [2024-06-10 11:35:19.264175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.245 [2024-06-10 11:35:19.272337] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:22.245 null0 00:28:22.245 [2024-06-10 11:35:19.304328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1686774 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1686774 /tmp/host.sock 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1686774 ']' 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:22.245 
11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:22.245 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:22.245 11:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.245 [2024-06-10 11:35:19.374842] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:28:22.245 [2024-06-10 11:35:19.374885] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686774 ] 00:28:22.245 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.245 [2024-06-10 11:35:19.454740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.505 [2024-06-10 11:35:19.516059] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.075 11:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.457 [2024-06-10 11:35:21.353914] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:24.457 [2024-06-10 11:35:21.353935] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:24.457 [2024-06-10 11:35:21.353948] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:24.457 [2024-06-10 11:35:21.442211] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:24.457 [2024-06-10 11:35:21.669063] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:24.457 [2024-06-10 11:35:21.669111] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:24.457 [2024-06-10 11:35:21.669133] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:24.457 [2024-06-10 11:35:21.669149] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:24.457 [2024-06-10 11:35:21.669169] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:24.457 [2024-06-10 11:35:21.672290] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2037920 was disconnected and freed. delete nvme_qpair. 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.457 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:28:24.717 11:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:26.099 11:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.039 11:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.039 11:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:27.039 11:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:27.980 11:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:28.920 11:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:30.302 [2024-06-10 11:35:27.109595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:30.302 [2024-06-10 11:35:27.109631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.302 [2024-06-10 11:35:27.109642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.302 [2024-06-10 11:35:27.109651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.302 [2024-06-10 11:35:27.109658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.302 [2024-06-10 11:35:27.109665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.302 [2024-06-10 11:35:27.109672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.302 [2024-06-10 11:35:27.109679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.302 [2024-06-10 11:35:27.109690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.302 [2024-06-10 11:35:27.109697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.302 [2024-06-10 11:35:27.109704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.302 [2024-06-10 11:35:27.109710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffeca0 is same with the state(5) to be set 00:28:30.302 [2024-06-10 11:35:27.119615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffeca0 (9): Bad file descriptor 00:28:30.302 [2024-06-10 11:35:27.129654] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.302 
11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.302 11:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:31.240 [2024-06-10 11:35:28.192901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:31.240 [2024-06-10 11:35:28.192989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffeca0 with addr=10.0.0.2, port=4420 00:28:31.240 [2024-06-10 11:35:28.193020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffeca0 is same with the state(5) to be set 00:28:31.240 [2024-06-10 11:35:28.193073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffeca0 (9): Bad file descriptor 00:28:31.240 [2024-06-10 11:35:28.194079] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:31.240 [2024-06-10 11:35:28.194132] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:31.240 [2024-06-10 11:35:28.194153] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:31.240 [2024-06-10 11:35:28.194175] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:31.240 [2024-06-10 11:35:28.194234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.240 [2024-06-10 11:35:28.194259] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:31.240 11:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.240 11:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.240 11:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.181 [2024-06-10 11:35:29.196668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
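The identical get_bdev_list blocks repeating above are the test's wait loop: once a second it asks the host application on /tmp/host.sock for its bdev names and compares the result with the expected value (nvme0n1 while the path is up, an empty string once the target-side interface has been pulled). A minimal sketch of the two helpers as they appear in the trace, assuming the same rpc_cmd wrapper and socket and ignoring whatever retry limit the real script applies:

    get_bdev_list() {
        # list the bdev names known to the host app, normalised into one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected="$1"
        # poll until the host's bdev list matches what the test expects
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

With the connection to 10.0.0.2 broken, each poll still reports nvme0n1 until the host gives up on the controller, which is what the reconnect and reset errors in this part of the trace are leading up to.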
00:28:32.181 [2024-06-10 11:35:29.196699] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:32.181 [2024-06-10 11:35:29.196720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.181 [2024-06-10 11:35:29.196729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.181 [2024-06-10 11:35:29.196738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.181 [2024-06-10 11:35:29.196750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.181 [2024-06-10 11:35:29.196757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.181 [2024-06-10 11:35:29.196764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.181 [2024-06-10 11:35:29.196771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.181 [2024-06-10 11:35:29.196778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.181 [2024-06-10 11:35:29.196785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.181 [2024-06-10 11:35:29.196792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.181 [2024-06-10 11:35:29.196799] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
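The ABORTED completions and the final "in failed state" message above are the expected fallout of the options this test passed when it attached the discovery controller earlier in the trace: with the target-side interface gone, reconnects are retried every second and the controller (and with it the nvme0n1 bdev and the discovery entry) is dropped once the loss timeout expires. The attach call, copied from the trace, was:

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Roughly: --reconnect-delay-sec is the gap between reconnect attempts, --fast-io-fail-timeout-sec is how soon outstanding I/O is failed back while reconnecting, and --ctrlr-loss-timeout-sec bounds how long reconnects are attempted before the controller is deleted, so the host gives up on the unreachable subsystem after about two seconds here.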
00:28:32.181 [2024-06-10 11:35:29.197619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffe130 (9): Bad file descriptor 00:28:32.181 [2024-06-10 11:35:29.198629] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:32.181 [2024-06-10 11:35:29.198639] nvme_ctrlr.c:1203:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.181 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.442 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:32.442 11:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:33.384 11:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.324 [2024-06-10 11:35:31.249922] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:34.324 [2024-06-10 11:35:31.249938] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:34.324 [2024-06-10 11:35:31.249951] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:34.324 [2024-06-10 11:35:31.338239] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:34.325 [2024-06-10 11:35:31.439048] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:34.325 [2024-06-10 11:35:31.439084] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:34.325 [2024-06-10 11:35:31.439102] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:34.325 [2024-06-10 11:35:31.439116] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:34.325 [2024-06-10 11:35:31.439123] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:34.325 [2024-06-10 11:35:31.446689] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x200ca60 was disconnected and freed. delete nvme_qpair. 
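Putting the interface back reverses the failure: the address and link are restored inside the namespace, the discovery service reconnects to 10.0.0.2:8009, and a fresh controller/bdev pair (nvme1 / nvme1n1) is attached, which the trace above records with the discovery_attach_controller_done and "found again" messages. The remove-and-restore pair at the heart of this test, condensed from the trace (wait_for_bdev as sketched earlier):

    # break the path: take the target-side address and link away
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''            # nvme0n1 must disappear

    # restore the path and expect the subsystem to be re-discovered under a new name
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1       # discovery re-attaches as nvme1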
00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1686774 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1686774 ']' 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1686774 00:28:34.325 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686774 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686774' 00:28:34.585 killing process with pid 1686774 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1686774 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1686774 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.585 rmmod nvme_tcp 00:28:34.585 rmmod nvme_fabrics 00:28:34.585 rmmod nvme_keyring 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
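From here the test tears its environment back down: the host application and the namespaced target are stopped through the killprocess helper, the kernel NVMe/TCP modules loaded at the start of the test are removed again (the rmmod lines above), and in the lines that follow the SPDK namespace is dropped and the leftover address flushed from the initiator port. In order, with the pids from this particular run:

    killprocess 1686774               # host app on /tmp/host.sock (kill + wait)
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    killprocess 1686512               # nvmf_tgt inside cvl_0_0_ns_spdk
    _remove_spdk_ns                   # presumably an 'ip netns delete cvl_0_0_ns_spdk'; its output is redirected in the trace
    ip -4 addr flush cvl_0_1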
00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1686512 ']' 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1686512 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1686512 ']' 00:28:34.585 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1686512 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686512 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686512' 00:28:34.845 killing process with pid 1686512 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1686512 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1686512 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.845 11:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.393 11:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.393 00:28:37.393 real 0m24.352s 00:28:37.393 user 0m27.925s 00:28:37.393 sys 0m7.572s 00:28:37.393 11:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:37.393 11:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:37.393 ************************************ 00:28:37.393 END TEST nvmf_discovery_remove_ifc 00:28:37.393 ************************************ 00:28:37.393 11:35:34 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:37.393 11:35:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:37.393 11:35:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:37.393 11:35:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.393 ************************************ 00:28:37.393 START TEST nvmf_identify_kernel_target 00:28:37.393 ************************************ 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:37.393 * Looking for test storage... 00:28:37.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
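The common.sh defaults traced above generate the host identity on the fly: nvme gen-hostnqn produces the host NQN and its UUID suffix is reused as the host ID. A minimal standalone sketch of that step (the parameter-expansion trick is illustrative; the script's own extraction method is not visible in the trace):

    # derive a host NQN and host ID the way the trace above does
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep only the UUID portion for --hostid
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"

Both values later feed the --hostnqn/--hostid arguments of the nvme discover call further down in this test.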
00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.393 11:35:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.604 11:35:42 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:45.604 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:45.604 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.604 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.605 
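Each E810 function found above is mapped to its kernel net device by globbing sysfs; an equivalent manual check is roughly the following sketch (PCI addresses and netdev names as reported in this trace):

    # resolve the PCI functions reported above to their netdev names
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        ls /sys/bus/pci/devices/$pci/net/     # the trace reports cvl_0_0 and cvl_0_1 here
    done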
11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:45.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:45.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:28:45.605 00:28:45.605 --- 10.0.0.2 ping statistics --- 00:28:45.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.605 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:28:45.605 00:28:45.605 --- 10.0.0.1 ping statistics --- 00:28:45.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.605 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.605 
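Strung together, the nvmf_tcp_init entries above build a two-port topology with one side in a private network namespace. Consolidated sketch, commands copied from the trace (interface and namespace names as reported there):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # one port moves into the private namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # its peer keeps 10.0.0.1 in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # reachability in one direction,
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and back

The 0.613 ms and 0.260 ms round trips in the ping output above confirm both directions before the kernel target is configured on 10.0.0.1.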
11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:45.605 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:45.606 11:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:48.909 Waiting for block devices as requested 00:28:48.909 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:49.170 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:49.170 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:49.170 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:49.170 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:49.430 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:49.430 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:49.430 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:49.690 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:28:49.690 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:49.951 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:49.951 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:49.951 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:49.951 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:50.211 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:50.211 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:50.211 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:50.473 11:35:47 
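configure_kernel_target drives the in-kernel nvmet target entirely through configfs. The modprobe/mkdir/echo/ln entries around this point reduce to the recipe below; note that xtrace does not record redirection targets, so the attribute file names shown are the standard nvmet configfs attributes and are assumed rather than read from the log:

    modprobe nvmet
    cfg=/sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    mkdir $cfg/subsystems/$nqn
    mkdir $cfg/subsystems/$nqn/namespaces/1
    mkdir $cfg/ports/1
    echo SPDK-$nqn    > $cfg/subsystems/$nqn/attr_model              # assumed target; matches the Model Number reported later
    echo 1            > $cfg/subsystems/$nqn/attr_allow_any_host     # assumed target
    echo /dev/nvme0n1 > $cfg/subsystems/$nqn/namespaces/1/device_path
    echo 1            > $cfg/subsystems/$nqn/namespaces/1/enable
    echo 10.0.0.1     > $cfg/ports/1/addr_traddr
    echo tcp          > $cfg/ports/1/addr_trtype
    echo 4420         > $cfg/ports/1/addr_trsvcid
    echo ipv4         > $cfg/ports/1/addr_adrfam
    ln -s $cfg/subsystems/$nqn $cfg/ports/1/subsystems/

After the symlink the target answers on 10.0.0.1:4420, which is exactly what the nvme discover call below reports: two records, the discovery subsystem and nqn.2016-06.io.spdk:testnqn.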
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:50.473 No valid GPT data, bailing 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:50.473 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:28:50.474 00:28:50.474 Discovery Log Number of Records 2, Generation counter 2 00:28:50.474 =====Discovery Log Entry 0====== 00:28:50.474 trtype: tcp 00:28:50.474 adrfam: ipv4 00:28:50.474 subtype: current discovery subsystem 00:28:50.474 treq: not specified, sq flow control disable supported 00:28:50.474 portid: 1 00:28:50.474 trsvcid: 4420 00:28:50.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:50.474 traddr: 10.0.0.1 00:28:50.474 eflags: none 00:28:50.474 sectype: none 00:28:50.474 =====Discovery Log Entry 1====== 
00:28:50.474 trtype: tcp 00:28:50.474 adrfam: ipv4 00:28:50.474 subtype: nvme subsystem 00:28:50.474 treq: not specified, sq flow control disable supported 00:28:50.474 portid: 1 00:28:50.474 trsvcid: 4420 00:28:50.474 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:50.474 traddr: 10.0.0.1 00:28:50.474 eflags: none 00:28:50.474 sectype: none 00:28:50.474 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:50.474 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:50.474 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.474 ===================================================== 00:28:50.474 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:50.474 ===================================================== 00:28:50.474 Controller Capabilities/Features 00:28:50.474 ================================ 00:28:50.474 Vendor ID: 0000 00:28:50.474 Subsystem Vendor ID: 0000 00:28:50.474 Serial Number: 3d6800fcd08c81400eb2 00:28:50.474 Model Number: Linux 00:28:50.474 Firmware Version: 6.7.0-68 00:28:50.474 Recommended Arb Burst: 0 00:28:50.474 IEEE OUI Identifier: 00 00 00 00:28:50.474 Multi-path I/O 00:28:50.474 May have multiple subsystem ports: No 00:28:50.474 May have multiple controllers: No 00:28:50.474 Associated with SR-IOV VF: No 00:28:50.474 Max Data Transfer Size: Unlimited 00:28:50.474 Max Number of Namespaces: 0 00:28:50.474 Max Number of I/O Queues: 1024 00:28:50.474 NVMe Specification Version (VS): 1.3 00:28:50.474 NVMe Specification Version (Identify): 1.3 00:28:50.474 Maximum Queue Entries: 1024 00:28:50.474 Contiguous Queues Required: No 00:28:50.474 Arbitration Mechanisms Supported 00:28:50.474 Weighted Round Robin: Not Supported 00:28:50.474 Vendor Specific: Not Supported 00:28:50.474 Reset Timeout: 7500 ms 00:28:50.474 Doorbell Stride: 4 bytes 00:28:50.474 NVM Subsystem Reset: Not Supported 00:28:50.474 Command Sets Supported 00:28:50.474 NVM Command Set: Supported 00:28:50.474 Boot Partition: Not Supported 00:28:50.474 Memory Page Size Minimum: 4096 bytes 00:28:50.474 Memory Page Size Maximum: 4096 bytes 00:28:50.474 Persistent Memory Region: Not Supported 00:28:50.474 Optional Asynchronous Events Supported 00:28:50.474 Namespace Attribute Notices: Not Supported 00:28:50.474 Firmware Activation Notices: Not Supported 00:28:50.474 ANA Change Notices: Not Supported 00:28:50.474 PLE Aggregate Log Change Notices: Not Supported 00:28:50.474 LBA Status Info Alert Notices: Not Supported 00:28:50.474 EGE Aggregate Log Change Notices: Not Supported 00:28:50.474 Normal NVM Subsystem Shutdown event: Not Supported 00:28:50.474 Zone Descriptor Change Notices: Not Supported 00:28:50.474 Discovery Log Change Notices: Supported 00:28:50.474 Controller Attributes 00:28:50.474 128-bit Host Identifier: Not Supported 00:28:50.474 Non-Operational Permissive Mode: Not Supported 00:28:50.474 NVM Sets: Not Supported 00:28:50.474 Read Recovery Levels: Not Supported 00:28:50.474 Endurance Groups: Not Supported 00:28:50.474 Predictable Latency Mode: Not Supported 00:28:50.474 Traffic Based Keep ALive: Not Supported 00:28:50.474 Namespace Granularity: Not Supported 00:28:50.474 SQ Associations: Not Supported 00:28:50.474 UUID List: Not Supported 00:28:50.474 Multi-Domain Subsystem: Not Supported 00:28:50.474 Fixed Capacity Management: Not Supported 00:28:50.474 Variable Capacity Management: Not 
Supported 00:28:50.474 Delete Endurance Group: Not Supported 00:28:50.474 Delete NVM Set: Not Supported 00:28:50.474 Extended LBA Formats Supported: Not Supported 00:28:50.474 Flexible Data Placement Supported: Not Supported 00:28:50.474 00:28:50.474 Controller Memory Buffer Support 00:28:50.474 ================================ 00:28:50.474 Supported: No 00:28:50.474 00:28:50.474 Persistent Memory Region Support 00:28:50.474 ================================ 00:28:50.474 Supported: No 00:28:50.474 00:28:50.474 Admin Command Set Attributes 00:28:50.474 ============================ 00:28:50.474 Security Send/Receive: Not Supported 00:28:50.474 Format NVM: Not Supported 00:28:50.474 Firmware Activate/Download: Not Supported 00:28:50.474 Namespace Management: Not Supported 00:28:50.474 Device Self-Test: Not Supported 00:28:50.474 Directives: Not Supported 00:28:50.474 NVMe-MI: Not Supported 00:28:50.474 Virtualization Management: Not Supported 00:28:50.474 Doorbell Buffer Config: Not Supported 00:28:50.474 Get LBA Status Capability: Not Supported 00:28:50.474 Command & Feature Lockdown Capability: Not Supported 00:28:50.474 Abort Command Limit: 1 00:28:50.474 Async Event Request Limit: 1 00:28:50.474 Number of Firmware Slots: N/A 00:28:50.474 Firmware Slot 1 Read-Only: N/A 00:28:50.474 Firmware Activation Without Reset: N/A 00:28:50.474 Multiple Update Detection Support: N/A 00:28:50.474 Firmware Update Granularity: No Information Provided 00:28:50.474 Per-Namespace SMART Log: No 00:28:50.474 Asymmetric Namespace Access Log Page: Not Supported 00:28:50.474 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:50.474 Command Effects Log Page: Not Supported 00:28:50.474 Get Log Page Extended Data: Supported 00:28:50.474 Telemetry Log Pages: Not Supported 00:28:50.474 Persistent Event Log Pages: Not Supported 00:28:50.474 Supported Log Pages Log Page: May Support 00:28:50.474 Commands Supported & Effects Log Page: Not Supported 00:28:50.474 Feature Identifiers & Effects Log Page:May Support 00:28:50.474 NVMe-MI Commands & Effects Log Page: May Support 00:28:50.474 Data Area 4 for Telemetry Log: Not Supported 00:28:50.474 Error Log Page Entries Supported: 1 00:28:50.474 Keep Alive: Not Supported 00:28:50.474 00:28:50.474 NVM Command Set Attributes 00:28:50.474 ========================== 00:28:50.474 Submission Queue Entry Size 00:28:50.474 Max: 1 00:28:50.474 Min: 1 00:28:50.474 Completion Queue Entry Size 00:28:50.474 Max: 1 00:28:50.474 Min: 1 00:28:50.474 Number of Namespaces: 0 00:28:50.474 Compare Command: Not Supported 00:28:50.474 Write Uncorrectable Command: Not Supported 00:28:50.474 Dataset Management Command: Not Supported 00:28:50.474 Write Zeroes Command: Not Supported 00:28:50.474 Set Features Save Field: Not Supported 00:28:50.474 Reservations: Not Supported 00:28:50.474 Timestamp: Not Supported 00:28:50.474 Copy: Not Supported 00:28:50.474 Volatile Write Cache: Not Present 00:28:50.474 Atomic Write Unit (Normal): 1 00:28:50.474 Atomic Write Unit (PFail): 1 00:28:50.475 Atomic Compare & Write Unit: 1 00:28:50.475 Fused Compare & Write: Not Supported 00:28:50.475 Scatter-Gather List 00:28:50.475 SGL Command Set: Supported 00:28:50.475 SGL Keyed: Not Supported 00:28:50.475 SGL Bit Bucket Descriptor: Not Supported 00:28:50.475 SGL Metadata Pointer: Not Supported 00:28:50.475 Oversized SGL: Not Supported 00:28:50.475 SGL Metadata Address: Not Supported 00:28:50.475 SGL Offset: Supported 00:28:50.475 Transport SGL Data Block: Not Supported 00:28:50.475 Replay Protected Memory Block: 
Not Supported 00:28:50.475 00:28:50.475 Firmware Slot Information 00:28:50.475 ========================= 00:28:50.475 Active slot: 0 00:28:50.475 00:28:50.475 00:28:50.475 Error Log 00:28:50.475 ========= 00:28:50.475 00:28:50.475 Active Namespaces 00:28:50.475 ================= 00:28:50.475 Discovery Log Page 00:28:50.475 ================== 00:28:50.475 Generation Counter: 2 00:28:50.475 Number of Records: 2 00:28:50.475 Record Format: 0 00:28:50.475 00:28:50.475 Discovery Log Entry 0 00:28:50.475 ---------------------- 00:28:50.475 Transport Type: 3 (TCP) 00:28:50.475 Address Family: 1 (IPv4) 00:28:50.475 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:50.475 Entry Flags: 00:28:50.475 Duplicate Returned Information: 0 00:28:50.475 Explicit Persistent Connection Support for Discovery: 0 00:28:50.475 Transport Requirements: 00:28:50.475 Secure Channel: Not Specified 00:28:50.475 Port ID: 1 (0x0001) 00:28:50.475 Controller ID: 65535 (0xffff) 00:28:50.475 Admin Max SQ Size: 32 00:28:50.475 Transport Service Identifier: 4420 00:28:50.475 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:50.475 Transport Address: 10.0.0.1 00:28:50.475 Discovery Log Entry 1 00:28:50.475 ---------------------- 00:28:50.475 Transport Type: 3 (TCP) 00:28:50.475 Address Family: 1 (IPv4) 00:28:50.475 Subsystem Type: 2 (NVM Subsystem) 00:28:50.475 Entry Flags: 00:28:50.475 Duplicate Returned Information: 0 00:28:50.475 Explicit Persistent Connection Support for Discovery: 0 00:28:50.475 Transport Requirements: 00:28:50.475 Secure Channel: Not Specified 00:28:50.475 Port ID: 1 (0x0001) 00:28:50.475 Controller ID: 65535 (0xffff) 00:28:50.475 Admin Max SQ Size: 32 00:28:50.475 Transport Service Identifier: 4420 00:28:50.475 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:50.475 Transport Address: 10.0.0.1 00:28:50.475 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:50.475 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.737 get_feature(0x01) failed 00:28:50.737 get_feature(0x02) failed 00:28:50.737 get_feature(0x04) failed 00:28:50.737 ===================================================== 00:28:50.737 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.737 ===================================================== 00:28:50.737 Controller Capabilities/Features 00:28:50.737 ================================ 00:28:50.737 Vendor ID: 0000 00:28:50.737 Subsystem Vendor ID: 0000 00:28:50.737 Serial Number: 8c9a0030411afb7f96cf 00:28:50.737 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:50.737 Firmware Version: 6.7.0-68 00:28:50.737 Recommended Arb Burst: 6 00:28:50.737 IEEE OUI Identifier: 00 00 00 00:28:50.737 Multi-path I/O 00:28:50.738 May have multiple subsystem ports: Yes 00:28:50.738 May have multiple controllers: Yes 00:28:50.738 Associated with SR-IOV VF: No 00:28:50.738 Max Data Transfer Size: Unlimited 00:28:50.738 Max Number of Namespaces: 1024 00:28:50.738 Max Number of I/O Queues: 128 00:28:50.738 NVMe Specification Version (VS): 1.3 00:28:50.738 NVMe Specification Version (Identify): 1.3 00:28:50.738 Maximum Queue Entries: 1024 00:28:50.738 Contiguous Queues Required: No 00:28:50.738 Arbitration Mechanisms Supported 00:28:50.738 Weighted Round Robin: Not Supported 00:28:50.738 Vendor Specific: Not Supported 
00:28:50.738 Reset Timeout: 7500 ms 00:28:50.738 Doorbell Stride: 4 bytes 00:28:50.738 NVM Subsystem Reset: Not Supported 00:28:50.738 Command Sets Supported 00:28:50.738 NVM Command Set: Supported 00:28:50.738 Boot Partition: Not Supported 00:28:50.738 Memory Page Size Minimum: 4096 bytes 00:28:50.738 Memory Page Size Maximum: 4096 bytes 00:28:50.738 Persistent Memory Region: Not Supported 00:28:50.738 Optional Asynchronous Events Supported 00:28:50.738 Namespace Attribute Notices: Supported 00:28:50.738 Firmware Activation Notices: Not Supported 00:28:50.738 ANA Change Notices: Supported 00:28:50.738 PLE Aggregate Log Change Notices: Not Supported 00:28:50.738 LBA Status Info Alert Notices: Not Supported 00:28:50.738 EGE Aggregate Log Change Notices: Not Supported 00:28:50.738 Normal NVM Subsystem Shutdown event: Not Supported 00:28:50.738 Zone Descriptor Change Notices: Not Supported 00:28:50.738 Discovery Log Change Notices: Not Supported 00:28:50.738 Controller Attributes 00:28:50.738 128-bit Host Identifier: Supported 00:28:50.738 Non-Operational Permissive Mode: Not Supported 00:28:50.738 NVM Sets: Not Supported 00:28:50.738 Read Recovery Levels: Not Supported 00:28:50.738 Endurance Groups: Not Supported 00:28:50.738 Predictable Latency Mode: Not Supported 00:28:50.738 Traffic Based Keep ALive: Supported 00:28:50.738 Namespace Granularity: Not Supported 00:28:50.738 SQ Associations: Not Supported 00:28:50.738 UUID List: Not Supported 00:28:50.738 Multi-Domain Subsystem: Not Supported 00:28:50.738 Fixed Capacity Management: Not Supported 00:28:50.738 Variable Capacity Management: Not Supported 00:28:50.738 Delete Endurance Group: Not Supported 00:28:50.738 Delete NVM Set: Not Supported 00:28:50.738 Extended LBA Formats Supported: Not Supported 00:28:50.738 Flexible Data Placement Supported: Not Supported 00:28:50.738 00:28:50.738 Controller Memory Buffer Support 00:28:50.738 ================================ 00:28:50.738 Supported: No 00:28:50.738 00:28:50.738 Persistent Memory Region Support 00:28:50.738 ================================ 00:28:50.738 Supported: No 00:28:50.738 00:28:50.738 Admin Command Set Attributes 00:28:50.738 ============================ 00:28:50.738 Security Send/Receive: Not Supported 00:28:50.738 Format NVM: Not Supported 00:28:50.738 Firmware Activate/Download: Not Supported 00:28:50.738 Namespace Management: Not Supported 00:28:50.738 Device Self-Test: Not Supported 00:28:50.738 Directives: Not Supported 00:28:50.738 NVMe-MI: Not Supported 00:28:50.738 Virtualization Management: Not Supported 00:28:50.738 Doorbell Buffer Config: Not Supported 00:28:50.738 Get LBA Status Capability: Not Supported 00:28:50.738 Command & Feature Lockdown Capability: Not Supported 00:28:50.738 Abort Command Limit: 4 00:28:50.738 Async Event Request Limit: 4 00:28:50.738 Number of Firmware Slots: N/A 00:28:50.738 Firmware Slot 1 Read-Only: N/A 00:28:50.738 Firmware Activation Without Reset: N/A 00:28:50.738 Multiple Update Detection Support: N/A 00:28:50.738 Firmware Update Granularity: No Information Provided 00:28:50.738 Per-Namespace SMART Log: Yes 00:28:50.738 Asymmetric Namespace Access Log Page: Supported 00:28:50.738 ANA Transition Time : 10 sec 00:28:50.738 00:28:50.738 Asymmetric Namespace Access Capabilities 00:28:50.738 ANA Optimized State : Supported 00:28:50.738 ANA Non-Optimized State : Supported 00:28:50.738 ANA Inaccessible State : Supported 00:28:50.738 ANA Persistent Loss State : Supported 00:28:50.738 ANA Change State : Supported 00:28:50.738 ANAGRPID is not 
changed : No 00:28:50.738 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:50.738 00:28:50.738 ANA Group Identifier Maximum : 128 00:28:50.738 Number of ANA Group Identifiers : 128 00:28:50.738 Max Number of Allowed Namespaces : 1024 00:28:50.738 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:50.738 Command Effects Log Page: Supported 00:28:50.738 Get Log Page Extended Data: Supported 00:28:50.738 Telemetry Log Pages: Not Supported 00:28:50.738 Persistent Event Log Pages: Not Supported 00:28:50.738 Supported Log Pages Log Page: May Support 00:28:50.738 Commands Supported & Effects Log Page: Not Supported 00:28:50.738 Feature Identifiers & Effects Log Page:May Support 00:28:50.738 NVMe-MI Commands & Effects Log Page: May Support 00:28:50.738 Data Area 4 for Telemetry Log: Not Supported 00:28:50.738 Error Log Page Entries Supported: 128 00:28:50.738 Keep Alive: Supported 00:28:50.738 Keep Alive Granularity: 1000 ms 00:28:50.738 00:28:50.738 NVM Command Set Attributes 00:28:50.738 ========================== 00:28:50.738 Submission Queue Entry Size 00:28:50.738 Max: 64 00:28:50.738 Min: 64 00:28:50.738 Completion Queue Entry Size 00:28:50.738 Max: 16 00:28:50.738 Min: 16 00:28:50.738 Number of Namespaces: 1024 00:28:50.738 Compare Command: Not Supported 00:28:50.738 Write Uncorrectable Command: Not Supported 00:28:50.738 Dataset Management Command: Supported 00:28:50.738 Write Zeroes Command: Supported 00:28:50.738 Set Features Save Field: Not Supported 00:28:50.738 Reservations: Not Supported 00:28:50.738 Timestamp: Not Supported 00:28:50.738 Copy: Not Supported 00:28:50.738 Volatile Write Cache: Present 00:28:50.738 Atomic Write Unit (Normal): 1 00:28:50.738 Atomic Write Unit (PFail): 1 00:28:50.738 Atomic Compare & Write Unit: 1 00:28:50.738 Fused Compare & Write: Not Supported 00:28:50.738 Scatter-Gather List 00:28:50.738 SGL Command Set: Supported 00:28:50.738 SGL Keyed: Not Supported 00:28:50.738 SGL Bit Bucket Descriptor: Not Supported 00:28:50.738 SGL Metadata Pointer: Not Supported 00:28:50.738 Oversized SGL: Not Supported 00:28:50.738 SGL Metadata Address: Not Supported 00:28:50.738 SGL Offset: Supported 00:28:50.738 Transport SGL Data Block: Not Supported 00:28:50.738 Replay Protected Memory Block: Not Supported 00:28:50.738 00:28:50.738 Firmware Slot Information 00:28:50.738 ========================= 00:28:50.738 Active slot: 0 00:28:50.738 00:28:50.738 Asymmetric Namespace Access 00:28:50.738 =========================== 00:28:50.738 Change Count : 0 00:28:50.738 Number of ANA Group Descriptors : 1 00:28:50.738 ANA Group Descriptor : 0 00:28:50.738 ANA Group ID : 1 00:28:50.738 Number of NSID Values : 1 00:28:50.738 Change Count : 0 00:28:50.739 ANA State : 1 00:28:50.739 Namespace Identifier : 1 00:28:50.739 00:28:50.739 Commands Supported and Effects 00:28:50.739 ============================== 00:28:50.739 Admin Commands 00:28:50.739 -------------- 00:28:50.739 Get Log Page (02h): Supported 00:28:50.739 Identify (06h): Supported 00:28:50.739 Abort (08h): Supported 00:28:50.739 Set Features (09h): Supported 00:28:50.739 Get Features (0Ah): Supported 00:28:50.739 Asynchronous Event Request (0Ch): Supported 00:28:50.739 Keep Alive (18h): Supported 00:28:50.739 I/O Commands 00:28:50.739 ------------ 00:28:50.739 Flush (00h): Supported 00:28:50.739 Write (01h): Supported LBA-Change 00:28:50.739 Read (02h): Supported 00:28:50.739 Write Zeroes (08h): Supported LBA-Change 00:28:50.739 Dataset Management (09h): Supported 00:28:50.739 00:28:50.739 Error Log 00:28:50.739 ========= 
00:28:50.739 Entry: 0 00:28:50.739 Error Count: 0x3 00:28:50.739 Submission Queue Id: 0x0 00:28:50.739 Command Id: 0x5 00:28:50.739 Phase Bit: 0 00:28:50.739 Status Code: 0x2 00:28:50.739 Status Code Type: 0x0 00:28:50.739 Do Not Retry: 1 00:28:50.739 Error Location: 0x28 00:28:50.739 LBA: 0x0 00:28:50.739 Namespace: 0x0 00:28:50.739 Vendor Log Page: 0x0 00:28:50.739 ----------- 00:28:50.739 Entry: 1 00:28:50.739 Error Count: 0x2 00:28:50.739 Submission Queue Id: 0x0 00:28:50.739 Command Id: 0x5 00:28:50.739 Phase Bit: 0 00:28:50.739 Status Code: 0x2 00:28:50.739 Status Code Type: 0x0 00:28:50.739 Do Not Retry: 1 00:28:50.739 Error Location: 0x28 00:28:50.739 LBA: 0x0 00:28:50.739 Namespace: 0x0 00:28:50.739 Vendor Log Page: 0x0 00:28:50.739 ----------- 00:28:50.739 Entry: 2 00:28:50.739 Error Count: 0x1 00:28:50.739 Submission Queue Id: 0x0 00:28:50.739 Command Id: 0x4 00:28:50.739 Phase Bit: 0 00:28:50.739 Status Code: 0x2 00:28:50.739 Status Code Type: 0x0 00:28:50.739 Do Not Retry: 1 00:28:50.739 Error Location: 0x28 00:28:50.739 LBA: 0x0 00:28:50.739 Namespace: 0x0 00:28:50.739 Vendor Log Page: 0x0 00:28:50.739 00:28:50.739 Number of Queues 00:28:50.739 ================ 00:28:50.739 Number of I/O Submission Queues: 128 00:28:50.739 Number of I/O Completion Queues: 128 00:28:50.739 00:28:50.739 ZNS Specific Controller Data 00:28:50.739 ============================ 00:28:50.739 Zone Append Size Limit: 0 00:28:50.739 00:28:50.739 00:28:50.739 Active Namespaces 00:28:50.739 ================= 00:28:50.739 get_feature(0x05) failed 00:28:50.739 Namespace ID:1 00:28:50.739 Command Set Identifier: NVM (00h) 00:28:50.739 Deallocate: Supported 00:28:50.739 Deallocated/Unwritten Error: Not Supported 00:28:50.739 Deallocated Read Value: Unknown 00:28:50.739 Deallocate in Write Zeroes: Not Supported 00:28:50.739 Deallocated Guard Field: 0xFFFF 00:28:50.739 Flush: Supported 00:28:50.739 Reservation: Not Supported 00:28:50.739 Namespace Sharing Capabilities: Multiple Controllers 00:28:50.739 Size (in LBAs): 3907029168 (1863GiB) 00:28:50.739 Capacity (in LBAs): 3907029168 (1863GiB) 00:28:50.739 Utilization (in LBAs): 3907029168 (1863GiB) 00:28:50.739 UUID: edc45700-e439-4c03-bc61-2a500d4fa6c7 00:28:50.739 Thin Provisioning: Not Supported 00:28:50.739 Per-NS Atomic Units: Yes 00:28:50.739 Atomic Boundary Size (Normal): 0 00:28:50.739 Atomic Boundary Size (PFail): 0 00:28:50.739 Atomic Boundary Offset: 0 00:28:50.739 NGUID/EUI64 Never Reused: No 00:28:50.739 ANA group ID: 1 00:28:50.739 Namespace Write Protected: No 00:28:50.739 Number of LBA Formats: 1 00:28:50.739 Current LBA Format: LBA Format #00 00:28:50.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:50.739 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.739 rmmod nvme_tcp 00:28:50.739 rmmod nvme_fabrics 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.739 11:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:52.653 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:52.913 11:35:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.124 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.3 (8086 0b00): ioatdma 
-> vfio-pci 00:28:57.124 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.124 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.041 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:28:59.041 00:28:59.041 real 0m21.729s 00:28:59.041 user 0m5.288s 00:28:59.041 sys 0m11.583s 00:28:59.041 11:35:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:59.041 11:35:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.041 ************************************ 00:28:59.041 END TEST nvmf_identify_kernel_target 00:28:59.041 ************************************ 00:28:59.041 11:35:55 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:59.042 11:35:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:59.042 11:35:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:59.042 11:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.042 ************************************ 00:28:59.042 START TEST nvmf_auth_host 00:28:59.042 ************************************ 00:28:59.042 11:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:59.042 * Looking for test storage... 00:28:59.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
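For symmetry with the earlier configfs recipe: the clean_kernel_target entries traced just before the END TEST banner above tear the kernel target back down. Consolidated sketch, with the same caveat that the redirect target of the traced 'echo 0' is not captured by xtrace and is assumed to be the namespace enable attribute:

    cfg=/sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable     # assumed target for the traced 'echo 0'
    rm -f  $cfg/ports/1/subsystems/$nqn                   # drop the port-to-subsystem link
    rmdir  $cfg/subsystems/$nqn/namespaces/1
    rmdir  $cfg/ports/1
    rmdir  $cfg/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet                           # unload once nothing holds the modules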
00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.042 11:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.191 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.191 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.191 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.192 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.192 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.452 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.452 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.452 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:29:07.452 00:29:07.452 --- 10.0.0.2 ping statistics --- 00:29:07.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.453 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:29:07.453 00:29:07.453 --- 10.0.0.1 ping statistics --- 00:29:07.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.453 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1701345 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1701345 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -L nvme_auth 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1701345 ']' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:07.453 11:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54baebcfb1c8204bb64e924301e3b06c 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MYH 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54baebcfb1c8204bb64e924301e3b06c 0 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54baebcfb1c8204bb64e924301e3b06c 0 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.395 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54baebcfb1c8204bb64e924301e3b06c 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MYH 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MYH 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.MYH 
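The gen_dhchap_key calls traced here boil down to three steps: read random bytes with xxd, wrap them into a DHHC-1 secret, and store the result in a mode-0600 temp file whose path lands in keys[]/ckeys[]. A minimal bash sketch of that flow follows; the body of the traced 'python -' step is not visible in the log, so the base64/CRC-32 wrapping and the two-digit digest index are assumptions inferred from the DHHC-1:NN:...: secrets printed later in this trace, not the suite's verbatim code.

# Sketch only -- mirrors the traced gen_dhchap_key steps; the DHHC-1 wrapping
# performed by the hidden 'python -' snippet is assumed, not copied.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                     # e.g. "null" 32, "sha512" 64
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Assumed layout: DHHC-1:<digest idx>:<base64(key bytes + CRC-32 of key)>:
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"                               # caller stores this path in keys[]/ckeys[]
}
keys[0]=$(gen_dhchap_key_sketch null 32)       # corresponds to host/auth.sh@73 above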
00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b0591254647e8eda7b84387ba6a4570f4da6e8187daa3a6c8ad75cd284924db8 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lFT 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b0591254647e8eda7b84387ba6a4570f4da6e8187daa3a6c8ad75cd284924db8 3 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b0591254647e8eda7b84387ba6a4570f4da6e8187daa3a6c8ad75cd284924db8 3 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b0591254647e8eda7b84387ba6a4570f4da6e8187daa3a6c8ad75cd284924db8 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.396 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lFT 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lFT 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lFT 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9d5ebc46dac654819fc091a3d1bc4e0fd63a7b9a7022982 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FuN 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9d5ebc46dac654819fc091a3d1bc4e0fd63a7b9a7022982 0 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9d5ebc46dac654819fc091a3d1bc4e0fd63a7b9a7022982 0 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9d5ebc46dac654819fc091a3d1bc4e0fd63a7b9a7022982 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FuN 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FuN 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FuN 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1aeb81a30fb538d897ff4b1d469db7e0748b801a6ab1224b 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xYZ 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1aeb81a30fb538d897ff4b1d469db7e0748b801a6ab1224b 2 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1aeb81a30fb538d897ff4b1d469db7e0748b801a6ab1224b 2 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1aeb81a30fb538d897ff4b1d469db7e0748b801a6ab1224b 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xYZ 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xYZ 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xYZ 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=055bcca9c324f46024fbf6a24e5a984a 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Us 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 055bcca9c324f46024fbf6a24e5a984a 1 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 055bcca9c324f46024fbf6a24e5a984a 1 00:29:08.657 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=055bcca9c324f46024fbf6a24e5a984a 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Us 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Us 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8Us 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0c2fbae7239650cc8177abf87d8bc00 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.43k 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0c2fbae7239650cc8177abf87d8bc00 1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0c2fbae7239650cc8177abf87d8bc00 1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0c2fbae7239650cc8177abf87d8bc00 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:08.658 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.43k 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.43k 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.43k 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.919 11:36:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.919 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd01bf6cd0a538ebb2279ed57b17cbc8c1fd4abb9211a59f 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SFj 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd01bf6cd0a538ebb2279ed57b17cbc8c1fd4abb9211a59f 2 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd01bf6cd0a538ebb2279ed57b17cbc8c1fd4abb9211a59f 2 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd01bf6cd0a538ebb2279ed57b17cbc8c1fd4abb9211a59f 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SFj 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SFj 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SFj 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d2bc1369e62e994de4d772603ffb885c 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.soT 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d2bc1369e62e994de4d772603ffb885c 0 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d2bc1369e62e994de4d772603ffb885c 0 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d2bc1369e62e994de4d772603ffb885c 00:29:08.920 11:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:08.920 11:36:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.soT 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.soT 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.soT 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40a23bac18a7ca3ae9fb723ab2acead733fd1260c8f5f3052287c87deb378b69 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nEe 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40a23bac18a7ca3ae9fb723ab2acead733fd1260c8f5f3052287c87deb378b69 3 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40a23bac18a7ca3ae9fb723ab2acead733fd1260c8f5f3052287c87deb378b69 3 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40a23bac18a7ca3ae9fb723ab2acead733fd1260c8f5f3052287c87deb378b69 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nEe 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nEe 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nEe 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1701345 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1701345 ']' 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
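The waitforlisten that completes at this point is simply the test waiting for the nvmf_tgt started earlier (inside the cvl_0_0_ns_spdk namespace, with -L nvme_auth so DH-HMAC-CHAP negotiation is logged) to open its RPC socket at /var/tmp/spdk.sock. A rough equivalent is below; the polling loop is a simplified stand-in for the suite's waitforlisten helper, not its actual implementation.

# Simplified stand-in for the traced nvmfappstart/waitforlisten sequence.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Poll the RPC socket until the target answers; rpc_get_methods is a cheap query.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done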
00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:08.920 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MYH 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lFT ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFT 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FuN 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xYZ ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xYZ 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8Us 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.43k ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.43k 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SFj 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.soT ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.soT 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nEe 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.182 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
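Before the kernel-target setup that begins here, every generated key file (and each non-empty controller key) was registered with the SPDK keyring by the rpc_cmd keyring_file_add_key calls above, so later RPCs can refer to the secrets by name. The same loop expressed directly against rpc.py rather than the suite's rpc_cmd wrapper (that substitution is the only liberty taken):

# Register each secret with the SPDK keyring as key0..key4 / ckey0..ckey3.
# Assumes keys[] and ckeys[] hold the file paths generated above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in "${!keys[@]}"; do
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key "key$i" "${keys[i]}"
    # Controller (bidirectional) keys are optional; ckeys[4] is empty and skipped.
    if [[ -n "${ckeys[i]}" ]]; then
        "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done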
00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:09.444 11:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:13.648 Waiting for block devices as requested 00:29:13.648 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:13.648 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:13.908 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:29:13.908 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:13.908 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:14.169 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:14.169 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:14.169 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:14.450 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:14.450 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:14.450 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:15.060 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:15.060 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:15.060 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:15.060 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:29:15.061 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:15.061 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:29:15.061 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:15.061 11:36:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:15.061 11:36:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:15.061 No valid GPT data, bailing 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:29:15.322 00:29:15.322 Discovery Log Number of Records 2, Generation counter 2 00:29:15.322 =====Discovery Log Entry 0====== 00:29:15.322 trtype: tcp 00:29:15.322 adrfam: ipv4 00:29:15.322 subtype: current discovery subsystem 00:29:15.322 treq: not specified, sq flow control disable supported 00:29:15.322 portid: 1 00:29:15.322 trsvcid: 4420 00:29:15.322 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:15.322 traddr: 10.0.0.1 00:29:15.322 eflags: none 00:29:15.322 sectype: none 00:29:15.322 =====Discovery Log Entry 1====== 00:29:15.322 trtype: tcp 00:29:15.322 adrfam: ipv4 00:29:15.322 subtype: nvme subsystem 00:29:15.322 treq: not specified, sq flow control disable supported 00:29:15.322 portid: 1 00:29:15.322 trsvcid: 4420 00:29:15.322 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:15.322 traddr: 10.0.0.1 00:29:15.322 eflags: none 00:29:15.322 sectype: none 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 
]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.322 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.322 nvme0n1 00:29:15.323 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.323 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.323 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.323 11:36:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.323 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.323 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 
11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 nvme0n1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.584 11:36:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.584 11:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.585 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.585 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.585 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.846 nvme0n1 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
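Every iteration of the digest/dhgroup/keyid loop follows the pattern traced above: push the per-host CHAP parameters into the kernel target's configfs host entry, then attach from the SPDK side with the matching keyring names, verify the controller appears, and detach. A condensed sketch of one iteration (sha256, ffdhe2048, keyid 1) is below; xtrace does not print redirection targets, so the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption about where those echoes land, and the long DHHC-1 secrets from the log are truncated here.

# One connect_authenticate iteration, condensed from the trace above.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$nvmet_host/dhchap_hash"        # assumed attribute names
echo ffdhe2048      > "$nvmet_host/dhchap_dhgroup"
echo "DHHC-1:00:ZDlkNWVi...==:" > "$nvmet_host/dhchap_key"       # keys[1], truncated
echo "DHHC-1:02:MWFlYjgx...==:" > "$nvmet_host/dhchap_ctrl_key"  # ckeys[1], truncated

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0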
00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.846 11:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.846 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.106 nvme0n1 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.106 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:16.107 11:36:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.107 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 nvme0n1 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 nvme0n1 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.368 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.629 nvme0n1 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.629 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:16.890 11:36:13 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.891 11:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.891 nvme0n1 00:29:16.891 
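[editor's note] Each nvmet_auth_set_key call above echoes the digest ('hmac(sha256)'), the DH group, and the DHHC-1 host and controller secrets for the keyid under test; the destinations of those echoes are outside this excerpt. The sketch below shows the usual way a Linux kernel nvmet target is programmed for DH-HMAC-CHAP through configfs, with the paths and attribute names being assumptions based on the stock nvmet layout rather than anything visible in this log.

  # Hedged sketch of the target-side key programming implied by the echoes above.
  # Configfs host entry and attribute names are assumptions; only the echoed
  # values (digest, dhgroup, DHHC-1 strings) appear in this section of the log.
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"             # digest used for DH-HMAC-CHAP
  echo ffdhe3072 > "$host_cfg/dhchap_dhgroup"               # DH group for this iteration
  echo 'DHHC-1:00:<base64 host secret>:' > "$host_cfg/dhchap_key"        # key1 in the trace
  echo 'DHHC-1:02:<base64 ctrlr secret>:' > "$host_cfg/dhchap_ctrl_key"  # ckey1 in the trace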
11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.891 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.891 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.891 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.891 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.891 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.151 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.152 nvme0n1 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.152 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
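[editor's note] The repeated ip_candidates blocks are the expansion of the get_main_ns_ip helper, which maps the transport in use to the name of the environment variable holding the target address and then dereferences it (10.0.0.1 here). A compact reconstruction, inferred from the xtrace rather than copied from nvmf/common.sh, might look like the following; the TEST_TRANSPORT variable name is an assumption.

  # Hedged reconstruction of get_main_ns_ip, inferred from the xtrace above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # TEST_TRANSPORT is assumed to hold "tcp" in this run.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"                          # expands to 10.0.0.1 in this log
  }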
00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:17.412 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.413 nvme0n1 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.413 
11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.413 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.674 11:36:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 nvme0n1 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:17.674 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:17.675 11:36:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.675 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.935 11:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.196 nvme0n1 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.196 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.197 11:36:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.197 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.458 nvme0n1 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.458 11:36:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.458 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.719 nvme0n1 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
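[editor's note] By this point the sha256 digest has been exercised against ffdhe2048, ffdhe3072 and, here, ffdhe4096 for key ids 0 through 4. The driving loop, read off the host/auth.sh@101-104 markers in the trace, is essentially the nested iteration below; the full contents of the dhgroups and keys arrays, and any outer loop over other digests, are not visible in this excerpt and are assumptions.

  # Hedged sketch of the loop producing these iterations (host/auth.sh@101-104).
  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do         # indices 0..4 of the keys array
          nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"    # program the kernel target
          connect_authenticate "sha256" "$dhgroup" "$keyid"  # attach, verify, detach
      done
  done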
00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.719 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.720 11:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.980 nvme0n1 00:29:18.980 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.980 11:36:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.981 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.241 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.242 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.242 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.242 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.503 nvme0n1 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:19.503 11:36:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.503 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.764 nvme0n1 00:29:19.764 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.764 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.764 11:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.764 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.764 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.025 11:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.025 
11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.025 11:36:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.025 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.285 nvme0n1 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.286 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.546 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.547 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.807 nvme0n1 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.807 11:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.807 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.807 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.807 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.807 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.808 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.808 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.068 
11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.068 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 nvme0n1 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.329 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.330 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.901 nvme0n1 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.901 11:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.901 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.902 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.844 nvme0n1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.844 11:36:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.844 11:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.415 nvme0n1 00:29:23.415 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.415 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.415 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.416 11:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.358 nvme0n1 00:29:24.358 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.359 
11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
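Every attach in this log is preceded by the same get_main_ns_ip trace (nvmf/common.sh@741-755) that resolves which address to dial. An approximate reconstruction from those xtrace lines follows; the TEST_TRANSPORT variable name and the indirect ${!ip} expansion are inferred, only the NVMF_* candidate names and the 10.0.0.1 result are visible in the trace:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP        # TCP runs (this log) dial the initiator IP

    # pick the variable name for the transport under test, then dereference it
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"    # 10.0.0.1 throughout this run
}
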
00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.359 11:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.931 nvme0n1 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.931 
11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.931 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.872 nvme0n1 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.872 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.873 11:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.873 nvme0n1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
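The cycle traced above repeats for every key slot: host/auth.sh@60 restricts the SPDK host to a single digest and DH group, host/auth.sh@61 attaches the controller with --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), and host/auth.sh@64-65 then verify and detach the resulting nvme0 controller. A minimal host-side sketch of one such iteration follows; it assumes SPDK's scripts/rpc.py is the client behind the rpc_cmd wrapper seen in the trace, that keys named key1/ckey1 were already registered with the host earlier in the test, and that the target at 10.0.0.1:4420 holds the matching DHHC-1 secrets.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, host side only (values taken from the trace).
set -e
RPC=./scripts/rpc.py                      # assumed location of the SPDK RPC client
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0

# Limit the host to one digest / DH group pair (host/auth.sh@60).
$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Attach with key slot 1 and its controller key (host/auth.sh@61).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller authenticated and came up, then detach it (host/auth.sh@64-65).
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
$RPC bdev_nvme_detach_controller nvme0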
00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.873 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.133 nvme0n1 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:26.133 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.134 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.394 nvme0n1 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.394 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 nvme0n1 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 nvme0n1 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.655 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
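The get_main_ns_ip fragments that keep appearing (nvmf/common.sh@741-755) all reduce to the same selection logic: map the transport to the name of an environment variable, bail out if either the transport or the mapping is empty, and print the resolved address, which is 10.0.0.1 throughout this run. A condensed sketch of that helper, reconstructed from the trace, is shown below; TEST_TRANSPORT is an assumed name for the variable carrying "tcp" here, and the indirection step is inferred rather than copied.

# Sketch of the address-selection helper traced at nvmf/common.sh@741-755.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP    # RDMA runs read the first target IP
        ["tcp"]=NVMF_INITIATOR_IP        # TCP runs (this one) read the initiator IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1          # [[ -z tcp ]] in the trace
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1                      # [[ -z NVMF_INITIATOR_IP ]]
    ip=${!ip}                                     # indirect expansion -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}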
00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.916 11:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.916 nvme0n1 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.916 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
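Zooming out, the host/auth.sh@100-103 markers give the shape of this whole section: nested loops over digests, DH groups and key slots, each pass installing the key pair on the target with nvmet_auth_set_key and then exercising it with connect_authenticate. The sketch below shows that driver loop under stated assumptions: the two helper functions are the ones traced above and are expected to be sourced from the test's auth.sh, and the digest/dhgroup/key lists are inferred from the values seen in this log rather than copied from the script.

# Sketch of the driver loop implied by host/auth.sh@100-104; assumes the
# nvmet_auth_set_key and connect_authenticate helpers from auth.sh are sourced.
digests=(sha256 sha384 sha512)                               # sha384 is the digest active here
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)                              # assumed names for the 5 DHHC-1 secrets

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done
done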
00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.177 nvme0n1 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.177 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 nvme0n1 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.439 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.440 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.700 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.701 nvme0n1 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.701 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.962 11:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.963 11:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.963 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.963 11:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.963 nvme0n1 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.963 11:36:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.963 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.224 nvme0n1 00:29:28.224 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:28.484 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.485 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 nvme0n1 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.746 11:36:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.746 11:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.006 nvme0n1 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:29.006 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:29.007 11:36:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.007 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.267 nvme0n1 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.267 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:29.528 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.789 nvme0n1 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.789 11:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.359 nvme0n1 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.359 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.360 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.620 nvme0n1 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.620 11:36:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.620 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.880 11:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.140 nvme0n1 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.140 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.141 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.711 nvme0n1 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
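The nvmet_auth_set_key <digest> <dhgroup> <keyid> steps traced above (the echo 'hmac(sha384)', echo ffdhe6144, and echo DHHC-1:... lines) push the digest, DH group, and per-key-ID secrets to the target side before each connection attempt. A minimal sketch of such a helper follows, assuming a Linux nvmet configfs target; the host directory path, attribute names, and the keys/ckeys arrays are illustrative assumptions, not taken from this log.

  # Sketch only: host_dir, attribute names, and the keys/ckeys arrays are assumed for illustration.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # hypothetical path
      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"       # e.g. 'hmac(sha384)' as echoed above
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"    # e.g. ffdhe6144
      echo "${keys[$keyid]}" > "${host_dir}/dhchap_key"        # DHHC-1:xx:...: host secret
      # A controller (bidirectional) key is written only for key IDs that have one; key ID 4 above has an empty ckey.
      [[ -n "${ckeys[$keyid]}" ]] && echo "${ckeys[$keyid]}" > "${host_dir}/dhchap_ctrl_key"
  }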
00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.711 11:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.281 nvme0n1 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
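On the host side, each connect_authenticate <digest> <dhgroup> <keyid> pass in this trace reduces to the same RPC sequence: restrict the allowed DH-HMAC-CHAP digest and DH group, attach the controller with the matching key (plus the controller key when one exists), verify the controller shows up, then detach. A condensed sketch of that sequence, using the values visible in the surrounding log (key names key0/ckey0 follow the log; their earlier registration is not shown here):

  # Condensed from the xtrace: one connect_authenticate pass (values from the sha384/ffdhe8192/keyid=0 iteration).
  digest=sha384 dhgroup=ffdhe8192 keyid=0
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"   # --dhchap-ctrlr-key is dropped when ckeyN is empty
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authenticated attach succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0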
00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.281 11:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.855 nvme0n1 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.855 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.167 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.765 nvme0n1 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.765 11:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.705 nvme0n1 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.705 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.706 11:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.277 nvme0n1 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.277 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.278 11:36:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.278 11:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.220 nvme0n1 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.220 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 nvme0n1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.221 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.483 nvme0n1 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.483 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.744 nvme0n1 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.744 11:36:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.744 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.745 11:36:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.745 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.005 nvme0n1 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.005 11:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.005 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.006 nvme0n1 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.006 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.266 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.267 nvme0n1 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.267 
11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.267 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.527 11:36:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.527 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.528 nvme0n1 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.528 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.788 nvme0n1 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.788 11:36:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.788 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.789 11:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 nvme0n1 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.049 
11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.049 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.310 nvme0n1 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.310 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.311 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.311 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.311 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.570 nvme0n1 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.570 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.831 11:36:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.831 11:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.831 nvme0n1 00:29:38.831 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.831 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.831 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.831 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.831 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
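The block above is one pass of the key loop for hmac(sha512) with ffdhe4096: the test programs key 1 and its controller key into the target, restricts the initiator to that digest/DH-group pair, attaches over TCP with bidirectional DH-HMAC-CHAP, confirms the controller appears, and detaches before moving on to the next keyid. A condensed sketch of that sequence, using the suite's own rpc_cmd and nvmet_auth_set_key helpers exactly as they appear in the trace (anything not shown in the trace is an assumption):

    # target side: install host key 1 and its controller key for hmac(sha512)/ffdhe4096
    nvmet_auth_set_key sha512 ffdhe4096 1
    # initiator side: allow only this digest/DH-group combination for DH-HMAC-CHAP
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # connect with bidirectional authentication (host key plus controller key)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller came up, then tear it down for the next key
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0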
00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:39.092 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.093 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.354 nvme0n1 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.354 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.355 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.616 nvme0n1 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.616 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.617 11:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.878 nvme0n1 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:39.878 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
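At this point the outer loop has advanced from ffdhe4096 to ffdhe6144; the host/auth.sh@101 and @102 markers show a loop over DH groups wrapping a loop over key indices, so the same five keys (keyid 0 through 4) are replayed for each group. A rough reconstruction of that structure, inferred from the trace markers rather than from the script itself (only ffdhe4096, ffdhe6144 and ffdhe8192 are visible in this part of the log):

    # nesting implied by the host/auth.sh@101-@104 markers in the trace
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                       # keyid 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target
            connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done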
00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.139 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.400 nvme0n1 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:40.400 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
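The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line makes the controller-key argument conditional: keyids 0 through 3 have a ckey defined, so their attach calls carry --dhchap-ctrlr-key and exercise bidirectional authentication, while keyid 4 has an empty ckey (the "[[ -z '' ]]" checks above) and is attached with --dhchap-key key4 alone, i.e. unidirectional authentication with no controller challenge. The attach below reconstructs how that array is presumably consumed; the exact expansion in auth.sh is not visible in this trace:

    # expands to the controller-key flag only when a ckey exists for this keyid
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"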
00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.401 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.661 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.661 11:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.661 11:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.661 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.661 11:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.921 nvme0n1 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.921 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.493 nvme0n1 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.493 11:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.064 nvme0n1 00:29:42.064 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.064 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.064 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.064 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.064 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.065 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.634 nvme0n1 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.634 11:36:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRiYWViY2ZiMWM4MjA0YmI2NGU5MjQzMDFlM2IwNmOV5pwK: 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA1OTEyNTQ2NDdlOGVkYTdiODQzODdiYTZhNDU3MGY0ZGE2ZTgxODdkYWEzYTZjOGFkNzVjZDI4NDkyNGRiOFGlY4U=: 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.634 11:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 nvme0n1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.203 11:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.143 nvme0n1 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.143 11:36:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU1YmNjYTljMzI0ZjQ2MDI0ZmJmNmEyNGU1YTk4NGGp2kn7: 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzBjMmZiYWU3MjM5NjUwY2M4MTc3YWJmODdkOGJjMDDZldRw: 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.143 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.713 nvme0n1 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2QwMWJmNmNkMGE1MzhlYmIyMjc5ZWQ1N2IxN2NiYzhjMWZkNGFiYjkyMTFhNTlmltqEnQ==: 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDJiYzEzNjllNjJlOTk0ZGU0ZDc3MjYwM2ZmYjg4NWPIBHf5: 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:44.713 11:36:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.713 11:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:44.973 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.973 11:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.542 nvme0n1 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:45.542 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDBhMjNiYWMxOGE3Y2EzYWU5ZmI3MjNhYjJhY2VhZDczM2ZkMTI2MGM4ZjVmMzA1MjI4N2M4N2RlYjM3OGI2OcxlA6s=: 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:45.543 11:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 nvme0n1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDlkNWViYzQ2ZGFjNjU0ODE5ZmMwOTFhM2QxYmM0ZTBmZDYzYTdiOWE3MDIyOTgyehNWFQ==: 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFlYjgxYTMwZmI1MzhkODk3ZmY0YjFkNDY5ZGI3ZTA3NDhiODAxYTZhYjEyMjRiNnqJ2g==: 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.484 
11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 request: 00:29:46.484 { 00:29:46.484 "name": "nvme0", 00:29:46.484 "trtype": "tcp", 00:29:46.484 "traddr": "10.0.0.1", 00:29:46.484 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.484 "adrfam": "ipv4", 00:29:46.484 "trsvcid": "4420", 00:29:46.484 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.484 "method": "bdev_nvme_attach_controller", 00:29:46.484 "req_id": 1 00:29:46.484 } 00:29:46.484 Got JSON-RPC error response 00:29:46.484 response: 00:29:46.484 { 00:29:46.484 "code": -5, 00:29:46.484 "message": "Input/output error" 00:29:46.484 } 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:46.484 
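The request/response pair traced above is the intended outcome for host/auth.sh@112: with DH-HMAC-CHAP required by nqn.2024-02.io.spdk:cnode0, an attach that supplies no --dhchap-key has to be rejected, and the RPC reports it as code -5 (Input/output error); the NOT helper then turns that failure into a passing check. A minimal stand-alone sketch of the same expected-failure check, assuming rpc.py talks to the same initiator-side SPDK instance the suite drives through its rpc_cmd wrapper:

#!/usr/bin/env bash
# Sketch only: the attach is *expected* to fail because no DH-CHAP key is given
# for a subsystem that requires authentication; a successful attach is the bug.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
    exit 1
fi
echo "OK: attach without a DH-CHAP key was rejected (Input/output error expected)"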
11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.484 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.485 request: 00:29:46.485 { 00:29:46.485 "name": "nvme0", 00:29:46.485 "trtype": "tcp", 00:29:46.485 "traddr": "10.0.0.1", 00:29:46.485 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.485 "adrfam": "ipv4", 00:29:46.485 "trsvcid": "4420", 00:29:46.485 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.485 "dhchap_key": "key2", 00:29:46.485 "method": "bdev_nvme_attach_controller", 00:29:46.485 "req_id": 1 00:29:46.485 } 00:29:46.485 Got JSON-RPC error response 00:29:46.485 response: 00:29:46.485 { 00:29:46.485 "code": -5, 00:29:46.485 "message": "Input/output error" 00:29:46.485 } 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.485 
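These failure cases sit against the successful connect_authenticate passes earlier in this section, which repeat one fixed sequence per key id: restrict the initiator to the digest/dhgroup under test, attach with the host key (plus the controller key whenever a bidirectional ckey exists), confirm the controller shows up as nvme0, then detach before the next combination. A condensed sketch of that sequence for keyid 2, using only calls that appear in the trace and assuming key2/ckey2 were already registered with the initiator by the suite's setup (not shown in this part of the log):

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Limit DH-HMAC-CHAP negotiation to the combination under test.
"$RPC" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach with the host key and the bidirectional controller key for keyid 2.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The controller must come up as nvme0; detach so the next key id can be tested.
[[ "$("$RPC" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$RPC" bdev_nvme_detach_controller nvme0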
11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.485 request: 00:29:46.485 { 00:29:46.485 "name": "nvme0", 00:29:46.485 "trtype": "tcp", 00:29:46.485 "traddr": "10.0.0.1", 00:29:46.485 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.485 "adrfam": "ipv4", 00:29:46.485 "trsvcid": "4420", 00:29:46.485 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.485 "dhchap_key": "key1", 00:29:46.485 "dhchap_ctrlr_key": "ckey2", 00:29:46.485 "method": "bdev_nvme_attach_controller", 00:29:46.485 "req_id": 1 
00:29:46.485 } 00:29:46.485 Got JSON-RPC error response 00:29:46.485 response: 00:29:46.485 { 00:29:46.485 "code": -5, 00:29:46.485 "message": "Input/output error" 00:29:46.485 } 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.485 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.485 rmmod nvme_tcp 00:29:46.746 rmmod nvme_fabrics 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1701345 ']' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1701345 ']' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1701345' 00:29:46.746 killing process with pid 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1701345 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.746 11:36:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.746 11:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.293 11:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.293 11:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:49.293 11:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:49.293 11:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:53.501 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:53.501 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:53.502 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:54.888 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:29:54.888 11:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.MYH /tmp/spdk.key-null.FuN /tmp/spdk.key-sha256.8Us /tmp/spdk.key-sha384.SFj /tmp/spdk.key-sha512.nEe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:54.888 11:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:59.194 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:29:59.194 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:59.194 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:59.194 00:29:59.194 real 0m59.990s 00:29:59.194 user 0m51.074s 00:29:59.194 sys 0m16.703s 00:29:59.194 11:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:59.194 11:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.194 ************************************ 00:29:59.194 END TEST nvmf_auth_host 00:29:59.194 ************************************ 00:29:59.194 11:36:56 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:59.194 11:36:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:59.194 11:36:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:59.194 11:36:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:59.194 11:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.194 ************************************ 00:29:59.194 START TEST nvmf_digest 00:29:59.194 ************************************ 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:59.194 * Looking for test storage... 
00:29:59.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.194 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.195 11:36:56 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.195 11:36:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.348 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.349 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.349 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.349 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:07.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:30:07.349 00:30:07.349 --- 10.0.0.2 ping statistics --- 00:30:07.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.349 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:30:07.349 00:30:07.349 --- 10.0.0.1 ping statistics --- 00:30:07.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.349 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:30:07.349 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.610 11:37:04 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.611 ************************************ 00:30:07.611 START TEST nvmf_digest_clean 00:30:07.611 ************************************ 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1717500 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1717500 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1717500 ']' 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.611 
11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:07.611 11:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.611 [2024-06-10 11:37:04.686504] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:07.611 [2024-06-10 11:37:04.686556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.611 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.611 [2024-06-10 11:37:04.775848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.871 [2024-06-10 11:37:04.838042] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.871 [2024-06-10 11:37:04.838081] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.871 [2024-06-10 11:37:04.838088] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.871 [2024-06-10 11:37:04.838094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.871 [2024-06-10 11:37:04.838100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
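The nvmf_tgt instance whose startup banner appears here runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init created a few lines above: the first e810 port (cvl_0_0) was moved into the namespace as the target side with 10.0.0.2, the second (cvl_0_1) stayed in the root namespace as the initiator side with 10.0.0.1, port 4420 was opened in iptables, and a ping in each direction confirmed the path. A condensed sketch of that wiring, assuming the same interface names the trace discovered under 0000:4b:00.0 and 0000:4b:00.1:

#!/usr/bin/env bash
set -e
# Target NIC goes into its own namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the listener port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1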
00:30:07.871 [2024-06-10 11:37:04.838119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:08.443 null0 00:30:08.443 [2024-06-10 11:37:05.632533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.443 [2024-06-10 11:37:05.656764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1717545 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1717545 /var/tmp/bperf.sock 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1717545 ']' 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:08.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:08.443 11:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:08.703 [2024-06-10 11:37:05.715020] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:08.703 [2024-06-10 11:37:05.715079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717545 ] 00:30:08.703 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.703 [2024-06-10 11:37:05.781752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.703 [2024-06-10 11:37:05.853069] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.643 11:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.212 nvme0n1 00:30:10.212 11:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:10.212 11:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.212 Running I/O for 2 seconds... 
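The randread/4096/qd128 pass above shows the shape every run_bperf iteration takes on the host side: bdevperf is started paused against its own RPC socket with --wait-for-rpc, the framework is released, the namespace exported by the target is attached with --ddgst so data digests are generated and checked by the initiator, and perform_tests drives the 2-second workload. A condensed sketch of that sequence, assuming the same binary paths and the nqn.2016-06.io.spdk:cnode1 subsystem the target configured above:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (--wait-for-rpc) on core 1 with its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# The suite uses its waitforlisten helper here; a plain socket poll stands in.
until [ -S "$BPERF_SOCK" ]; do sleep 0.2; done

# Release the framework, then attach the target namespace with data digest on.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload; the suite kills the bdevperf pid once stats are read.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests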
00:30:12.750 00:30:12.750 Latency(us) 00:30:12.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.750 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:12.751 nvme0n1 : 2.00 22040.28 86.09 0.00 0.00 5799.81 2911.31 13611.32 00:30:12.751 =================================================================================================================== 00:30:12.751 Total : 22040.28 86.09 0.00 0.00 5799.81 2911.31 13611.32 00:30:12.751 0 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.751 | select(.opcode=="crc32c") 00:30:12.751 | "\(.module_name) \(.executed)"' 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1717545 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1717545 ']' 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1717545 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1717545 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1717545' 00:30:12.751 killing process with pid 1717545 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1717545 00:30:12.751 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.751 00:30:12.751 Latency(us) 00:30:12.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.751 =================================================================================================================== 00:30:12.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1717545 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:12.751 11:37:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1718258 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1718258 /var/tmp/bperf.sock 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1718258 ']' 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:12.751 [2024-06-10 11:37:09.794635] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:12.751 [2024-06-10 11:37:09.794689] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718258 ] 00:30:12.751 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:12.751 Zero copy mechanism will not be used. 
00:30:12.751 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.751 [2024-06-10 11:37:09.856765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.751 [2024-06-10 11:37:09.916830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:12.751 11:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.011 11:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.011 11:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.580 nvme0n1 00:30:13.580 11:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:13.580 11:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:13.580 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:13.580 Zero copy mechanism will not be used. 00:30:13.580 Running I/O for 2 seconds... 
00:30:15.493 00:30:15.493 Latency(us) 00:30:15.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.493 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:15.493 nvme0n1 : 2.00 3270.30 408.79 0.00 0.00 4888.58 1531.27 12451.84 00:30:15.493 =================================================================================================================== 00:30:15.493 Total : 3270.30 408.79 0.00 0.00 4888.58 1531.27 12451.84 00:30:15.493 0 00:30:15.493 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:15.493 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:15.493 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:15.493 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:15.493 | select(.opcode=="crc32c") 00:30:15.493 | "\(.module_name) \(.executed)"' 00:30:15.493 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1718258 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1718258 ']' 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1718258 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1718258 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1718258' 00:30:15.754 killing process with pid 1718258 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1718258 00:30:15.754 Received shutdown signal, test time was about 2.000000 seconds 00:30:15.754 00:30:15.754 Latency(us) 00:30:15.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.754 =================================================================================================================== 00:30:15.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:15.754 11:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1718258 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:16.016 11:37:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1718786 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1718786 /var/tmp/bperf.sock 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1718786 ']' 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:16.016 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:16.016 [2024-06-10 11:37:13.133335] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
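(Sanity check on the 16-deep, 128 KiB randread table above: MiB/s is IOPS times the IO size, 3270.30 * 131072 / 2^20 ≈ 408.79 MiB/s, and likewise 22040.28 * 4096 / 2^20 ≈ 86.09 MiB/s for the earlier 4 KiB run, matching the reported columns.)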
00:30:16.016 [2024-06-10 11:37:13.133388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718786 ] 00:30:16.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.016 [2024-06-10 11:37:13.193783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.277 [2024-06-10 11:37:13.254689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.277 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:16.277 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:16.277 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:16.277 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:16.277 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:16.538 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.538 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.798 nvme0n1 00:30:16.798 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:16.798 11:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.058 Running I/O for 2 seconds... 
00:30:18.969 00:30:18.969 Latency(us) 00:30:18.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.969 nvme0n1 : 2.00 23875.98 93.27 0.00 0.00 5352.99 2810.49 11746.07 00:30:18.969 =================================================================================================================== 00:30:18.969 Total : 23875.98 93.27 0.00 0.00 5352.99 2810.49 11746.07 00:30:18.969 0 00:30:18.969 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:18.969 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:18.969 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:18.969 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:18.969 | select(.opcode=="crc32c") 00:30:18.969 | "\(.module_name) \(.executed)"' 00:30:18.969 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:19.229 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:19.229 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:19.229 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:19.229 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:19.229 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1718786 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1718786 ']' 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1718786 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1718786 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1718786' 00:30:19.230 killing process with pid 1718786 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1718786 00:30:19.230 Received shutdown signal, test time was about 2.000000 seconds 00:30:19.230 00:30:19.230 Latency(us) 00:30:19.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.230 =================================================================================================================== 00:30:19.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.230 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1718786 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:19.491 11:37:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1719382 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1719382 /var/tmp/bperf.sock 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1719382 ']' 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.491 [2024-06-10 11:37:16.521573] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:19.491 [2024-06-10 11:37:16.521624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719382 ] 00:30:19.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:19.491 Zero copy mechanism will not be used. 
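(The pass/fail decision for each of these clean runs is the crc32c accounting queried right after perform_tests, at host/digest.sh@93-96 in the trace. A standalone equivalent of that check, assuming the bperf socket from the current run is still listening and scan_dsa=false so the expected module is software:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # pull per-opcode accel stats from bdevperf and keep only the crc32c entry
    read -r acc_module acc_executed < <($RPC -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # the test requires at least one crc32c to have executed, and in the software module
    [ "$acc_executed" -gt 0 ] && [ "$acc_module" = software ]

The jq filter is the same one the script uses above; only the variable plumbing is simplified here.)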
00:30:19.491 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.491 [2024-06-10 11:37:16.583056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.491 [2024-06-10 11:37:16.641189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:19.491 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:19.751 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.751 11:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.010 nvme0n1 00:30:20.010 11:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:20.011 11:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:20.270 Zero copy mechanism will not be used. 00:30:20.270 Running I/O for 2 seconds... 
00:30:22.179 00:30:22.179 Latency(us) 00:30:22.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.179 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:22.179 nvme0n1 : 2.00 4126.43 515.80 0.00 0.00 3871.14 1978.68 12502.25 00:30:22.179 =================================================================================================================== 00:30:22.179 Total : 4126.43 515.80 0.00 0.00 3871.14 1978.68 12502.25 00:30:22.179 0 00:30:22.179 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:22.179 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:22.179 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:22.179 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:22.179 | select(.opcode=="crc32c") 00:30:22.179 | "\(.module_name) \(.executed)"' 00:30:22.179 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1719382 00:30:22.439 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1719382 ']' 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1719382 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1719382 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1719382' 00:30:22.440 killing process with pid 1719382 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1719382 00:30:22.440 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.440 00:30:22.440 Latency(us) 00:30:22.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.440 =================================================================================================================== 00:30:22.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.440 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1719382 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1717500 00:30:22.700 11:37:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1717500 ']' 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1717500 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1717500 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1717500' 00:30:22.700 killing process with pid 1717500 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1717500 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1717500 00:30:22.700 00:30:22.700 real 0m15.231s 00:30:22.700 user 0m29.988s 00:30:22.700 sys 0m3.393s 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:22.700 ************************************ 00:30:22.700 END TEST nvmf_digest_clean 00:30:22.700 ************************************ 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:22.700 11:37:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:22.961 ************************************ 00:30:22.961 START TEST nvmf_digest_error 00:30:22.961 ************************************ 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1720025 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1720025 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1720025 ']' 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:22.961 11:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:22.961 [2024-06-10 11:37:20.007660] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:22.961 [2024-06-10 11:37:20.007707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.961 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.961 [2024-06-10 11:37:20.095712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.961 [2024-06-10 11:37:20.158190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.961 [2024-06-10 11:37:20.158223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.961 [2024-06-10 11:37:20.158230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.961 [2024-06-10 11:37:20.158236] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.961 [2024-06-10 11:37:20.158241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
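(The nvmf_digest_error group starting here differs from the clean runs above only in how crc32c is serviced: the target assigns the opcode to the accel error module and, once the bperf controller is attached with --ddgst, arms corruption injection, so the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions seen further down are the expected outcome. A sketch of just those target-side steps, using the rpc_cmd helper exactly as it appears later in this trace:

    # on the nvmf target, route crc32c through the error-injection accel module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # baseline: injection disabled while the bperf controller attaches with --ddgst
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # then corrupt crc32c results so the initiator-side reads fail their data digest
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

The flags shown are copied from the trace below; this is not a complete reproduction of host/digest.sh, only the part that distinguishes the error runs from the clean ones.)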
00:30:22.961 [2024-06-10 11:37:20.158263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.903 [2024-06-10 11:37:20.892325] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:23.903 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.904 null0 00:30:23.904 [2024-06-10 11:37:20.971003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.904 [2024-06-10 11:37:20.995168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:23.904 11:37:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1720171 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1720171 /var/tmp/bperf.sock 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1720171 ']' 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:23.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:23.904 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.904 [2024-06-10 11:37:21.048406] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:23.904 [2024-06-10 11:37:21.048453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720171 ] 00:30:23.904 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.904 [2024-06-10 11:37:21.109305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.165 [2024-06-10 11:37:21.170624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.165 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:24.165 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:24.165 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.165 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.426 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.687 nvme0n1 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:24.687 11:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.687 Running I/O for 2 seconds... 00:30:24.687 [2024-06-10 11:37:21.863816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.687 [2024-06-10 11:37:21.863854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.687 [2024-06-10 11:37:21.863865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.687 [2024-06-10 11:37:21.874886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.687 [2024-06-10 11:37:21.874910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.687 [2024-06-10 11:37:21.874923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.687 [2024-06-10 11:37:21.889966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.687 [2024-06-10 11:37:21.889988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.687 [2024-06-10 11:37:21.889996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.687 [2024-06-10 11:37:21.903615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.687 [2024-06-10 11:37:21.903636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.687 [2024-06-10 11:37:21.903644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.951 [2024-06-10 11:37:21.914502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.951 [2024-06-10 11:37:21.914521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.951 [2024-06-10 11:37:21.914530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.951 [2024-06-10 11:37:21.929034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.951 [2024-06-10 11:37:21.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.951 [2024-06-10 11:37:21.929062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.951 [2024-06-10 11:37:21.941136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.951 [2024-06-10 11:37:21.941156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18114 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.941164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:21.953989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:21.954009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:21.963394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:21.963414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.963423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:21.975427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:21.975447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.975455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:21.986703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:21.986723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.986731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:21.996986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:21.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:21.997014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.010514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.010534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.010542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.025316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.025335] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.025343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.040311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.040331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.040339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.050378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.050397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.050405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.064291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.064312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.064324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.079713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.079733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.079741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.092769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.092788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.092797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.102541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.102560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.102568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.114994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.115014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.115021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.126393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.126414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.126421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.138196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.138215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.138223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.148130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.952 [2024-06-10 11:37:22.148150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.952 [2024-06-10 11:37:22.148158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.952 [2024-06-10 11:37:22.159920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.953 [2024-06-10 11:37:22.159940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.953 [2024-06-10 11:37:22.159948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.953 [2024-06-10 11:37:22.171870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:24.953 [2024-06-10 11:37:22.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.953 [2024-06-10 11:37:22.171900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.181502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.181522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.181530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.193279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.193307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.206841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.206861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.206870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.217153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.217173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.217181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.230756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.230780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.230789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.241165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.241184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.241192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.255840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.255860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.255868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.269097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.269117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.269128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.279304] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.279323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.279332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.293845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.293865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.293873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.304712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.304731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.304740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.319313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.319334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.319342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.331397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.331417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.249 [2024-06-10 11:37:22.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.249 [2024-06-10 11:37:22.342364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.249 [2024-06-10 11:37:22.342383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.342391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.352913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.352933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.352941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.365332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.365351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.365359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.376752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.376774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.376782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.387585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.387606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.387614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.400007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.400028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.400036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.411947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.411968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.411976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.423141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.423161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.423169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.434665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.434685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.434693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.443985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.444005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.444013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.456732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.456752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.456760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.250 [2024-06-10 11:37:22.468668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.250 [2024-06-10 11:37:22.468687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.250 [2024-06-10 11:37:22.468695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.479914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.479933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.479941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.490769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.490789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.490797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.502681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.502701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.502709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.513143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.513163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.513171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.524891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.524910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.524918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.536877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.536896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.536904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.547756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.547775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.547783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.559418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.559437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.559446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.570182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.570201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.570213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.581474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.581494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.524 [2024-06-10 11:37:22.581501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.524 [2024-06-10 11:37:22.593950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.524 [2024-06-10 11:37:22.593973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:25.524 [2024-06-10 11:37:22.593981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.603887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.603907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.603915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.615315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.615335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.615343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.626765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.626785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.626793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.638704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.638724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.638732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.648780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.648802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.648810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.661456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.661476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.661484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.672727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.672751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9795 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.672759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.684802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.684827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.684836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.694369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.694390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.694398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.709758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.709777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.709786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.721559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.721579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.721586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.733762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.733781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.733789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.525 [2024-06-10 11:37:22.743809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.525 [2024-06-10 11:37:22.743833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.525 [2024-06-10 11:37:22.743841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.756170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.756190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.756198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.770212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.770232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.770240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.779999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.780018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.780027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.792108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.792129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.792137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.806655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.806675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.806682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.816949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.816968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.816977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.831937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.831957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.831965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.844005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 
11:37:22.844025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.844033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.854837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.854856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.854864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.866164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.866183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.878085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.878105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.878116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.887677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.887696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.887705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.900733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.900753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.900761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.914017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.914037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.914046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.924851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.924871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.924879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.938400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.938420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.938428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.952674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.952693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.952702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.963449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.963469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.963477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.975303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.975323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.975331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.986270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.787 [2024-06-10 11:37:22.986290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.787 [2024-06-10 11:37:22.986298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.787 [2024-06-10 11:37:22.999565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:25.788 [2024-06-10 11:37:22.999585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.788 [2024-06-10 11:37:22.999593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.011812] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.011839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.011847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.022005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.022025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.022033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.035337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.035357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.035365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.045104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.045126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.045136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.059203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.059232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.072287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.072307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.072315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.084497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.084517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.084529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:26.049 [2024-06-10 11:37:23.094668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.094688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.094697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.106827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.106847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.106855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.049 [2024-06-10 11:37:23.119974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.049 [2024-06-10 11:37:23.119994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.049 [2024-06-10 11:37:23.120002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.130809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.130834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.130843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.145538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.145559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.145566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.156583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.156603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.156612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.168172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.168192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.168200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.180435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.180456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.180464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.190655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.190678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.190686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.202016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.202036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.202044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.213341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.213362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.213370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.225061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.225081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.225090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.237022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.237042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.237050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.248081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.248101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.248109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.260198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.260218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.260226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.050 [2024-06-10 11:37:23.269708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.050 [2024-06-10 11:37:23.269727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.050 [2024-06-10 11:37:23.269736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.282349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.282369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.282378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.294004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.294024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.294032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.304903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.304922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.304930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.317766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.317786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.317794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.330658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.330678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:26.312 [2024-06-10 11:37:23.330687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.340149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.340169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.340177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.352310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.352330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.352338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.366344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.366364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.366372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.379158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.379178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.379186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.390420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.390439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.390454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.403158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.403178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.403186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.413809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.413833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.413842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.424081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.424101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.424109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.436657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.436677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.436685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.449123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.449143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.449151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.459175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.459194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.459202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.470580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.470605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.470616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.482548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.312 [2024-06-10 11:37:23.482568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.312 [2024-06-10 11:37:23.482576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.312 [2024-06-10 11:37:23.493217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.313 [2024-06-10 11:37:23.493236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.313 [2024-06-10 11:37:23.493244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.313 [2024-06-10 11:37:23.504512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.313 [2024-06-10 11:37:23.504532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.313 [2024-06-10 11:37:23.504540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.313 [2024-06-10 11:37:23.518537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.313 [2024-06-10 11:37:23.518557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.313 [2024-06-10 11:37:23.518568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.313 [2024-06-10 11:37:23.529180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.313 [2024-06-10 11:37:23.529199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.313 [2024-06-10 11:37:23.529207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.539311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.539332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.539340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.551437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.551456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.551465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.566301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.566321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.566330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.581012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 
00:30:26.574 [2024-06-10 11:37:23.581033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.581041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.593928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.593949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.593961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.604441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.604461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.604470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.619194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.619214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.619222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.633727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.633746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.633754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.643517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.643536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.643544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.655333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.655352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.655360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.667446] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.667465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.667473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.678428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.678447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.678455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.688868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.688887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.688895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.701350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.701373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.701381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.712925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.712952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.723226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.723246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.723254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.734994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.735014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.735022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:26.574 [2024-06-10 11:37:23.747400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.747419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.747428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.757998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.574 [2024-06-10 11:37:23.758018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.574 [2024-06-10 11:37:23.758026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.574 [2024-06-10 11:37:23.769009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.575 [2024-06-10 11:37:23.769028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.575 [2024-06-10 11:37:23.769037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.575 [2024-06-10 11:37:23.781793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.575 [2024-06-10 11:37:23.781813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.575 [2024-06-10 11:37:23.781825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.575 [2024-06-10 11:37:23.792759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.575 [2024-06-10 11:37:23.792778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.575 [2024-06-10 11:37:23.792787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.836 [2024-06-10 11:37:23.803970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.836 [2024-06-10 11:37:23.803989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.836 [2024-06-10 11:37:23.803998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.836 [2024-06-10 11:37:23.817170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.836 [2024-06-10 11:37:23.817190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.836 [2024-06-10 11:37:23.817198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.836 [2024-06-10 11:37:23.830267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.836 [2024-06-10 11:37:23.830286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.836 [2024-06-10 11:37:23.830295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.836 [2024-06-10 11:37:23.841772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a6f80) 00:30:26.836 [2024-06-10 11:37:23.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.836 [2024-06-10 11:37:23.841800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.836 00:30:26.836 Latency(us) 00:30:26.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.836 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:26.836 nvme0n1 : 2.00 21344.18 83.38 0.00 0.00 5989.71 2860.90 19156.68 00:30:26.836 =================================================================================================================== 00:30:26.836 Total : 21344.18 83.38 0.00 0.00 5989.71 2860.90 19156.68 00:30:26.836 0 00:30:26.836 11:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:26.836 11:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:26.836 11:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:26.836 | .driver_specific 00:30:26.836 | .nvme_error 00:30:26.836 | .status_code 00:30:26.836 | .command_transient_transport_error' 00:30:26.836 11:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1720171 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1720171 ']' 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1720171 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:26.836 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1720171 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1720171' 00:30:27.097 killing process with pid 1720171 00:30:27.097 11:37:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1720171 00:30:27.097 Received shutdown signal, test time was about 2.000000 seconds 00:30:27.097 00:30:27.097 Latency(us) 00:30:27.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.097 =================================================================================================================== 00:30:27.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1720171 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1720677 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1720677 /var/tmp/bperf.sock 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1720677 ']' 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:27.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:27.097 11:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.097 [2024-06-10 11:37:24.262769] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:27.097 [2024-06-10 11:37:24.262825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720677 ] 00:30:27.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:27.097 Zero copy mechanism will not be used. 
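The trace above closes one error-injection case by reading back the transient-error counter: host/digest.sh's get_transient_errcount pipes bdev_get_iostat for nvme0n1 through jq and checks that the result (167 in this run) is greater than zero. A minimal standalone sketch of that check, assuming the bperf RPC socket from this run is still listening; the rpc.py path, socket path and jq filter are copied from the trace, the variable names are illustrative:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  # bdev_get_iostat carries NVMe error counters here because the harness configures
  # bdev_nvme_set_options --nvme-error-stat before attaching the controller
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the test case passes when the injected data digest errors surface as transient transport errors
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"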
00:30:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.358 [2024-06-10 11:37:24.324536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.358 [2024-06-10 11:37:24.383142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.929 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:27.929 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:27.929 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.929 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.190 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.451 nvme0n1 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:28.451 11:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:28.712 Zero copy mechanism will not be used. 00:30:28.712 Running I/O for 2 seconds... 
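For the next case (randread, 128 KiB I/O, queue depth 16) the trace above starts a fresh bdevperf instance and re-arms crc32c error injection before driving I/O for two seconds. A condensed, hypothetical replay of that sequence is sketched below; every path, address and argument is copied from this run, and the target-side rpc.py calls are shown without -s, so they would go to the target application's default RPC socket, which is an assumption here (the harness also waits for each RPC socket to come up before issuing commands):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf on its own RPC socket and have it wait for configuration (-z)
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # enable NVMe error counters and unlimited retries in the bdevperf app
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure no crc32c error injection is active yet
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest enabled (--ddgst)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: start corrupting crc32c results (arguments verbatim from the trace)
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the workload; each digest mismatch appears below as a data digest error and a (00/22) completion
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests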
00:30:28.712 [2024-06-10 11:37:25.781102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.781143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.781153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.789742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.789767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.789776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.797805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.797833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.797841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.804837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.804858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.804866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.811880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.811900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.811909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.819784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.819811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.819818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.827268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.827290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.827298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.837029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.837051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.837060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.847005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.847026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.847034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.855300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.855321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.855329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.863913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.863934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.863942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.872568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.872589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.872597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.880443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.880464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.880472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.888259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.888280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.888289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.895369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.895390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.895398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.903746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.712 [2024-06-10 11:37:25.903767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.712 [2024-06-10 11:37:25.903775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.712 [2024-06-10 11:37:25.913362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.713 [2024-06-10 11:37:25.913383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.713 [2024-06-10 11:37:25.913391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.713 [2024-06-10 11:37:25.921844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.713 [2024-06-10 11:37:25.921864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.713 [2024-06-10 11:37:25.921872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.713 [2024-06-10 11:37:25.928129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.713 [2024-06-10 11:37:25.928149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.713 [2024-06-10 11:37:25.928157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.713 [2024-06-10 11:37:25.934034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.713 [2024-06-10 11:37:25.934055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.713 [2024-06-10 11:37:25.934063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.941308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.941329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.941338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.947951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.947971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.947979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.952619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.952639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.952654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.959454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.959475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.959482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.967963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.967984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.967991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.977019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.977048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.988067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.988088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:25.988096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:25.995680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:25.995700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.974 [2024-06-10 11:37:25.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.002826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.002846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.002854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.011767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.011787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.011795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.019275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.019296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.024469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.024497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.024505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.034781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.034802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.034810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.042766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.042788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.050805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.050830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.050838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.057719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.057740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.057748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.065105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.065126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.065134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.072568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.072590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.080078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.080098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.080106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.089100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.089122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.097913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.097935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.097943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.105510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.105531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.974 [2024-06-10 11:37:26.105539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.974 [2024-06-10 11:37:26.110623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.974 [2024-06-10 11:37:26.110644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.110653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.118649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.118671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.118679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.124899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.124920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.124928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.131212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.131233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.131241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.139373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.139394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.139402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.146335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.146356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.146364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.154092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.154112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.154123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.161045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.161065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.161073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.168003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.168024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.168032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.174610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.174630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.174638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.181854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.181874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.181883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.975 [2024-06-10 11:37:26.192163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:28.975 [2024-06-10 11:37:26.192184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.975 [2024-06-10 11:37:26.192193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.197719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.197739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.197748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.205085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 
11:37:26.205106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.205114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.210908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.210929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.210937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.218651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.218676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.218685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.226604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.226626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.226636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.236511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.236532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.236540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.244913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.244933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.244942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.253665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.253685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.253694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.263127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.263148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.263157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.271840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.271861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.271869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.279455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.279475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.279483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.287171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.287191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.287199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.295032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.295052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.295061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.304028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.304048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.304057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.309185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.239 [2024-06-10 11:37:26.309205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.239 [2024-06-10 11:37:26.309214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.239 [2024-06-10 11:37:26.318502] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.318522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.318532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.327477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.327501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.327510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.337505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.337527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.337536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.345906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.345927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.345935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.352396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.352416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.352425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.359906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.359926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.359939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.366086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.366107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.366116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:29.240 [2024-06-10 11:37:26.374564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.374584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.374593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.381999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.382028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.387361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.387381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.387390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.395208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.395229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.395237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.405616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.405637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.405646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.417036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.417056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.417065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.425273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.425293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.425302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.434339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.434363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.434371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.441740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.441761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.450992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.451012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.451021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.240 [2024-06-10 11:37:26.456955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.240 [2024-06-10 11:37:26.456976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.240 [2024-06-10 11:37:26.456985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.463696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.463718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.463727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.471407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.471429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.471438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.478319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.478340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.478352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.486807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.486832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.486840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.496116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.496136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.496145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.504757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.504778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.504787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.513634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.513655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.513664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.522893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.522915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.522923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.532627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.532649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.532657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.541816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.541843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.541851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.550979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.551001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.551014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.558009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.558030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.558038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.562193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.562213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.562222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.570391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.570412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.570423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.578763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.578784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.578792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.502 [2024-06-10 11:37:26.586039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.502 [2024-06-10 11:37:26.586060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.502 [2024-06-10 11:37:26.586068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.593475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.593496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 
[2024-06-10 11:37:26.593504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.600686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.600707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.600715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.605441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.605463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.605471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.613804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.613831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.613839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.618624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.618646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.618654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.624985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.625007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.625016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.633242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.633263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.633272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.641813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.641840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.641849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.650482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.650503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.650512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.659185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.659207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.659215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.667385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.667405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.667413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.677854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.677874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.677883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.686228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.686250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.686258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.694229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.694251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.694259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.701276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.701299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.701310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.710072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.710094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.710102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.503 [2024-06-10 11:37:26.717937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.503 [2024-06-10 11:37:26.717959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.503 [2024-06-10 11:37:26.717967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.726295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.726317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.726325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.735147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.735169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.735177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.742807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.742834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.752316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.752337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.752346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.759984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.760005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.760013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.768337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.768358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.768366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.775457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.775483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.775491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.783685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.783716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.791231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.791253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.791261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.799292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.799313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.799321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.807777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.807799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.807808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.814384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 
[2024-06-10 11:37:26.814406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.819769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.819791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.819799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.825914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.825935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.825943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.833800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.833827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.833836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.843814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.843841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.843849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.850536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.850566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.856964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.856985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.863168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.863189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.863197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.868842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.765 [2024-06-10 11:37:26.868864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.765 [2024-06-10 11:37:26.868871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.765 [2024-06-10 11:37:26.878389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.878412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.878421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.884018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.884039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.884048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.890250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.890273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.890281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.897865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.897886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.897897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.905645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.905667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.905675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.911562] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.911592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.917160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.917186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.917194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.922263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.922285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.922292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.929548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.929570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.929578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.934449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.934470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.934478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.941147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.941169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.941178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.948596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.948618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.948626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:29.766 [2024-06-10 11:37:26.955865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.955892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.955900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.965808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.965833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.965841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.971835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.971856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.971864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.978968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.978989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.978997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.766 [2024-06-10 11:37:26.985247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:29.766 [2024-06-10 11:37:26.985269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.766 [2024-06-10 11:37:26.985276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.027 [2024-06-10 11:37:26.992310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.027 [2024-06-10 11:37:26.992331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.027 [2024-06-10 11:37:26.992339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.027 [2024-06-10 11:37:26.997360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.027 [2024-06-10 11:37:26.997381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.027 [2024-06-10 11:37:26.997389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.027 [2024-06-10 11:37:27.007937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.027 [2024-06-10 11:37:27.007960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.027 [2024-06-10 11:37:27.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.027 [2024-06-10 11:37:27.019241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.027 [2024-06-10 11:37:27.019263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.027 [2024-06-10 11:37:27.019271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.027 [2024-06-10 11:37:27.025227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.027 [2024-06-10 11:37:27.025249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.025257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.033927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.033949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.033957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.039047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.039068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.039076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.046188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.046209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.046217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.054018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.054039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.054047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.060707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.060728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.060736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.065657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.065678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.065687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.072723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.072744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.072752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.081961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.081982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.081993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.088240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.088262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.088269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.095592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.095614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.095622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.101611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.101633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.101641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.107607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.107628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.107635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.113781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.113803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.113810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.121196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.121218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.121226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.129616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.129637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.129645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.137941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.137962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.137970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.146157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.146181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.146189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.156142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.156164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 
[2024-06-10 11:37:27.156173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.163393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.163415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.163423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.171034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.171056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.175900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.175921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.175929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.181391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.181412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.181420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.028 [2024-06-10 11:37:27.187569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.028 [2024-06-10 11:37:27.187589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.028 [2024-06-10 11:37:27.187597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.194683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.194704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.194712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.204955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.204976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.204984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.210989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.211012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.211021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.218871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.218893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.218902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.229448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.229470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.229478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.236910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.236931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.236940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.029 [2024-06-10 11:37:27.245203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.029 [2024-06-10 11:37:27.245224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.029 [2024-06-10 11:37:27.245233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.252026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.252057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.261030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.261051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.261059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.269076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.269097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.269106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.276595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.276627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.281166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.281187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.281195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.289451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.289472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.289480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.298278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.298300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.298308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.306640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.290 [2024-06-10 11:37:27.306660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.290 [2024-06-10 11:37:27.306668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.290 [2024-06-10 11:37:27.315473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.315493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.315501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.323520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.323542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.323550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.331594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.331616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.331624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.340021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.340041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.340048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.348234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.348254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.348262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.355084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.355105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.355113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.363794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.363815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.363827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.371647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 
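The repeated nvme_tcp data digest errors above are the expected result of this stage of the test: the bdevperf initiator is attached with data digest enabled while CRC32C results are deliberately corrupted in the accel layer, so the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed, illustrative sketch of that setup, using only RPC calls that appear verbatim in the digest.sh trace further down in this log; the rpc.py path, sockets, target address and arguments are the ones from this run, not general defaults:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Initiator (bdevperf) side: keep per-command NVMe error statistics and retry
  # failed commands indefinitely so transient errors are counted rather than fatal.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject CRC32C corruption into the accel framework; arguments copied from the
  # trace (rpc_cmd there sends this to the default RPC socket, not bperf.sock).
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256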
00:30:30.291 [2024-06-10 11:37:27.371668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.371676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.381710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.381731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.381739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.388788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.388808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.388816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.396437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.396458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.396466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.403953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.403974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.403982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.413829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.413850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.413862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.421201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.421222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.421230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.428098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.428120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.428129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.435170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.435191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.435200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.442382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.442403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.442411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.450119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.450140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.450148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.456536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.456557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.291 [2024-06-10 11:37:27.456565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.291 [2024-06-10 11:37:27.461838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.291 [2024-06-10 11:37:27.461859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.461867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.470382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.470404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.470412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.477569] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.477594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.477602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.486436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.486457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.486465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.492854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.492875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.492883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.501420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.501449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.505966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.505987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.505995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.292 [2024-06-10 11:37:27.513508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.292 [2024-06-10 11:37:27.513530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.292 [2024-06-10 11:37:27.513538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.520968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.520989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
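Each digest mismatch shows up as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, as in the entries above, and the bdev layer accumulates these in the per-bdev NVMe error statistics. The pass/fail check that follows this run (get_transient_errcount in the digest.sh trace below) amounts to reading that counter over the bdevperf RPC socket and requiring it to be non-zero; a condensed sketch, with the rpc.py path, socket and jq filter taken from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Count of commands that completed with a transient transport error on nvme0n1.
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
  # This run recorded 262 such completions; the stage passes only if the count
  # is greater than zero.
  (( errcount > 0 ))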
00:30:30.553 [2024-06-10 11:37:27.528856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.528878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.528886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.534661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.534683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.534691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.540657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.540679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.540687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.546919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.546940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.546948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.553321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.553343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.553351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.559059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.559080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.559088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.569061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.569083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.569091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.577947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.577968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.577976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.589534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.589555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.589563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.601038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.601059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.601067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.610444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.610465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.610477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.615726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.615748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.615756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.620263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.620284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.620293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.624638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.624658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.624667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.631466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.553 [2024-06-10 11:37:27.631487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.553 [2024-06-10 11:37:27.631496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.553 [2024-06-10 11:37:27.639302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.639323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.639331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.648533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.648555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.648564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.655151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.655176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.655184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.660813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.660840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.660848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.667261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.667285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.667293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.669876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.669898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.669906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.677327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.677348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.677356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.686445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.686467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.686475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.693286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.693308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.693316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.700957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.700978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.700986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.711982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.712003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.712011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.718484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.718506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.718514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.724461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.724482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 
[2024-06-10 11:37:27.724490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.733188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.733209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.733217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.741818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.741846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.741854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.747426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.747445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.747453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.755786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.755808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.755816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.763319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.763341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.763349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.554 [2024-06-10 11:37:27.770699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8f10) 00:30:30.554 [2024-06-10 11:37:27.770720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.554 [2024-06-10 11:37:27.770729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.815 00:30:30.815 Latency(us) 00:30:30.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:30.815 nvme0n1 : 2.00 4060.06 507.51 0.00 0.00 3937.49 639.61 11846.89 00:30:30.815 
=================================================================================================================== 00:30:30.815 Total : 4060.06 507.51 0.00 0.00 3937.49 639.61 11846.89 00:30:30.815 0 00:30:30.815 11:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:30.815 11:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:30.815 11:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:30.815 | .driver_specific 00:30:30.815 | .nvme_error 00:30:30.815 | .status_code 00:30:30.815 | .command_transient_transport_error' 00:30:30.815 11:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:30.815 11:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 262 > 0 )) 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1720677 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1720677 ']' 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1720677 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:30.815 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1720677 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1720677' 00:30:31.076 killing process with pid 1720677 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1720677 00:30:31.076 Received shutdown signal, test time was about 2.000000 seconds 00:30:31.076 00:30:31.076 Latency(us) 00:30:31.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.076 =================================================================================================================== 00:30:31.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1720677 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1721318 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1721318 /var/tmp/bperf.sock 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@830 -- # '[' -z 1721318 ']' 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:31.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:31.076 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.076 [2024-06-10 11:37:28.232430] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:31.076 [2024-06-10 11:37:28.232485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721318 ] 00:30:31.076 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.076 [2024-06-10 11:37:28.293450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.336 [2024-06-10 11:37:28.354577] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.336 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:31.336 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:31.336 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:31.336 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.596 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.856 nvme0n1 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:31.856 11:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.118 Running I/O for 2 seconds... 00:30:32.118 [2024-06-10 11:37:29.108245] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f81e0 00:30:32.118 [2024-06-10 11:37:29.109393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.109423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.120409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190ddc00 00:30:32.118 [2024-06-10 11:37:29.121718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.121739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.131279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190dece0 00:30:32.118 [2024-06-10 11:37:29.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.132602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.142490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e95a0 00:30:32.118 [2024-06-10 11:37:29.143942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.143961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.151503] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e0ea0 00:30:32.118 [2024-06-10 11:37:29.152353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.152371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.162332] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e1f80 00:30:32.118 [2024-06-10 11:37:29.163178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.118 [2024-06-10 11:37:29.163197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.118 [2024-06-10 11:37:29.173176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e6738 00:30:32.118 [2024-06-10 11:37:29.174009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.174027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.183994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e5658 00:30:32.119 [2024-06-10 11:37:29.184824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.184842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.194839] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e4578 00:30:32.119 [2024-06-10 11:37:29.195665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.205694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190e3498 00:30:32.119 [2024-06-10 11:37:29.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.206547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.215776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190fa3a0 00:30:32.119 [2024-06-10 11:37:29.216591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.216610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.228223] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.229078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.229097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.239399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.239684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 
11:37:29.239706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.250611] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.250898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.250916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.261800] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.262080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.262098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.272998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.273276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.273294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.284163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.284445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.284463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.295325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.295596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.295614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.306488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.306773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.317694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.317964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 
[2024-06-10 11:37:29.317982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.328851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.329132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.329150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.119 [2024-06-10 11:37:29.340026] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.119 [2024-06-10 11:37:29.340337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.119 [2024-06-10 11:37:29.340355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.351253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.351523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.351541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.362415] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.362670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.362688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.373592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.373859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.373876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.384799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.385081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.385099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.395978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.396246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:32.381 [2024-06-10 11:37:29.396264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.407160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.407442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.407460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.418316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.418610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.418628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.429504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.429777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.440657] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.440933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.451837] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.452122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.452139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.462997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.463266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.463284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.474153] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.474307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.474325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.485301] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.485577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.485595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.496450] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.496732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.496750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.507635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.507894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.507912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.518838] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.519081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.519099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.529968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.530244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.381 [2024-06-10 11:37:29.530264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.381 [2024-06-10 11:37:29.541145] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.381 [2024-06-10 11:37:29.541419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.541436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.382 [2024-06-10 11:37:29.552336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.382 [2024-06-10 11:37:29.552601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10137 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.552619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.382 [2024-06-10 11:37:29.563512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.382 [2024-06-10 11:37:29.563747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.563766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.382 [2024-06-10 11:37:29.574652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.382 [2024-06-10 11:37:29.574897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.574915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.382 [2024-06-10 11:37:29.585832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.382 [2024-06-10 11:37:29.586100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.586118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.382 [2024-06-10 11:37:29.597026] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.382 [2024-06-10 11:37:29.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.382 [2024-06-10 11:37:29.597321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.608216] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.642 [2024-06-10 11:37:29.608500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.642 [2024-06-10 11:37:29.608518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.619391] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.642 [2024-06-10 11:37:29.619656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.642 [2024-06-10 11:37:29.619673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.630590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.642 [2024-06-10 11:37:29.630837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19887 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.642 [2024-06-10 11:37:29.630858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.641775] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.642 [2024-06-10 11:37:29.642060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.642 [2024-06-10 11:37:29.642078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.652985] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.642 [2024-06-10 11:37:29.653277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.642 [2024-06-10 11:37:29.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.642 [2024-06-10 11:37:29.664177] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.664453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.664471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.675348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.675637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.675655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.686505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.686748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.686766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.697680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.697966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.697984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.708812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.709088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:21176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.709106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.720045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.720196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.720213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.731178] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.731453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.731471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.742392] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.742681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.742699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.753550] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.753835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.764709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.764965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.764982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.775859] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.776121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.776138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.787071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.787335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:13546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.787352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.798215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.798481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.798505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.809394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.809661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.809678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.820604] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.820892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.820908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.831809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.832057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.832075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.843025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.843280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.843299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.854175] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.854438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.854456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.643 [2024-06-10 11:37:29.865361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.643 [2024-06-10 11:37:29.865660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.643 [2024-06-10 11:37:29.865677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.876567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.876839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.876856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.887949] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.888227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.888245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.899093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.899392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.899410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.910280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.910521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.910539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.921427] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.921691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.921712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.932602] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.932895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.932913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.943796] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.944070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.944088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.954961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.955221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.955239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.966095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.966349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.966367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.904 [2024-06-10 11:37:29.977299] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.904 [2024-06-10 11:37:29.977555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.904 [2024-06-10 11:37:29.977572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:29.988463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:29.988718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:29.988735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:29.999593] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:29.999839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:29.999856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.011220] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.011492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.011511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.022522] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.022678] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.022695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.033745] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.034143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.034160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.044957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.045225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.045243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.056208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.056500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.056518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.067412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.067669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.067686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.078639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.078905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.089868] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.090128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.090146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.101175] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 
11:37:30.101467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.101485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.112352] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.112604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.112621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.905 [2024-06-10 11:37:30.123542] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:32.905 [2024-06-10 11:37:30.123837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.905 [2024-06-10 11:37:30.123854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.134712] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.134951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.134967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.145951] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.146231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.146249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.157129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.157414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.157433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.168293] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.168576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.168594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.179561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 
[2024-06-10 11:37:30.179850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.179867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.190783] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.191064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.191081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.201961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.202246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.202264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.213142] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.213415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.213438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.224315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.224620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.166 [2024-06-10 11:37:30.235493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.166 [2024-06-10 11:37:30.235781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.166 [2024-06-10 11:37:30.235799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.246670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.246956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.246974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.257876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 
00:30:33.167 [2024-06-10 11:37:30.258155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.258173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.269021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.269308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.269326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.280207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.280477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.280495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.291387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.291662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.291680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.302559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.302848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.302866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.313761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.314039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.314057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.324969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.325250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.325267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.336143] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with 
pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.336431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.336449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.347335] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.347582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.347600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.358516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.358680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.358697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.369729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.370003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.167 [2024-06-10 11:37:30.380891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.167 [2024-06-10 11:37:30.381149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.167 [2024-06-10 11:37:30.381167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.392096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.392380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.392397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.403250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.403531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.403548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.414449] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.414742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.414759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.425607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.425883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.425901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.436809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.437096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.437114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.447994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.448279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.448297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.459179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.459435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.459453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.470355] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.470636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.470654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.481531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.481805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.481828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.492708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.492956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.492974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.503875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.504179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.515063] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.515365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.515383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.526291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.526542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.526567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.537442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.537688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.537707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.548626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.548900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.548918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.559784] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.560035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.560053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.570960] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.571220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.429 [2024-06-10 11:37:30.571238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.429 [2024-06-10 11:37:30.582134] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.429 [2024-06-10 11:37:30.582413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.582432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.593337] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.593610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.593628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.604523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.604781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.604801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.615694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.615977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.615995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.626924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.627203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.627221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.638102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.638393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.638411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.430 [2024-06-10 11:37:30.649304] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.430 [2024-06-10 11:37:30.649592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.430 [2024-06-10 11:37:30.649610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.660468] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.660775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.671642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.671917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.671935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.682814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.683084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.683102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.693986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.694264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.694282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.705155] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.705407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.705424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.716366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.716633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.716651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 
11:37:30.727535] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.727803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.727826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.738739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.739016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.739035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.749887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.750139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.750156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.761051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.761321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.761339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.772205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.772477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.772495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.783320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.783635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.783653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.794551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.794797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.794814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
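(Aside, not part of the test output: the repeated data_crc32_calc_done failures above are the NVMe/TCP data digest (DDGST) check, a CRC32C computed over each PDU's data; the 0x1000-byte WRITE payloads in these entries fail that check and are completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of the digest comparison, in plain C rather than SPDK code and using a hypothetical corrupted payload, assuming only that DDGST is CRC32C over the data block:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: bitwise CRC32C (Castagnoli), the digest used for the
     * NVMe/TCP DDGST field. SPDK computes the same digest with its own
     * accelerated helpers. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* 0x1000-byte payload, matching the len:0x1000 WRITEs logged above. */
        uint8_t payload[0x1000];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t expected = crc32c(payload, sizeof(payload));
        payload[0] ^= 0x01;  /* simulate corruption of the data in flight */
        uint32_t received = crc32c(payload, sizeof(payload));

        printf("DDGST expected=0x%08x received=0x%08x -> %s\n",
               expected, received,
               expected == received ? "ok" : "data digest error");
        return 0;
    }

A mismatch like the one printed here is what the transport reports as a data digest error before failing the command back to the initiator.)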
00:30:33.691 [2024-06-10 11:37:30.805721] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.805988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.806006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.816896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.817190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.817208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.828087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.828335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.828352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.839262] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.839541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.839559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.850439] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.850718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.850735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.861632] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.861881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.861898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.872766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.873035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.873053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b 
p:0 m:0 dnr:0 00:30:33.691 [2024-06-10 11:37:30.884109] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.691 [2024-06-10 11:37:30.884407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.691 [2024-06-10 11:37:30.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.692 [2024-06-10 11:37:30.895284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.692 [2024-06-10 11:37:30.895564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.692 [2024-06-10 11:37:30.895588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.692 [2024-06-10 11:37:30.906477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.692 [2024-06-10 11:37:30.906761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.692 [2024-06-10 11:37:30.906779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.917653] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.917921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.917939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.928801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.929088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.929106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.939994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.940249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.940266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.951116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.951375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.951393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.962280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.962548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.962565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.973486] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.973775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.973793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.984661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.984915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.984932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:30.995841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:30.996113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:30.996130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.006991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.007247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.007264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.018162] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.018409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.018427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.029412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.029692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.029709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.040609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.040876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.040893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.051812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.052074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.052091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.063002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.952 [2024-06-10 11:37:31.063287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.952 [2024-06-10 11:37:31.063305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.952 [2024-06-10 11:37:31.074133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.953 [2024-06-10 11:37:31.074379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.953 [2024-06-10 11:37:31.074396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.953 [2024-06-10 11:37:31.085325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.953 [2024-06-10 11:37:31.085592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.953 [2024-06-10 11:37:31.085610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.953 [2024-06-10 11:37:31.096482] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8306e0) with pdu=0x2000190f57b0 00:30:33.953 [2024-06-10 11:37:31.096753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.953 [2024-06-10 11:37:31.096771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:33.953 00:30:33.953 Latency(us) 00:30:33.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.953 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.953 nvme0n1 : 2.01 22829.36 89.18 0.00 0.00 5595.46 2860.90 12351.02 00:30:33.953 =================================================================================================================== 00:30:33.953 Total : 22829.36 
89.18 0.00 0.00 5595.46 2860.90 12351.02 00:30:33.953 0 00:30:33.953 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:33.953 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:33.953 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:33.953 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:33.953 | .driver_specific 00:30:33.953 | .nvme_error 00:30:33.953 | .status_code 00:30:33.953 | .command_transient_transport_error' 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 179 > 0 )) 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1721318 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1721318 ']' 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1721318 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1721318 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1721318' 00:30:34.212 killing process with pid 1721318 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1721318 00:30:34.212 Received shutdown signal, test time was about 2.000000 seconds 00:30:34.212 00:30:34.212 Latency(us) 00:30:34.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.212 =================================================================================================================== 00:30:34.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.212 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1721318 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1721915 00:30:34.472 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1721915 /var/tmp/bperf.sock 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1721915 ']' 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:34.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:34.473 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.473 [2024-06-10 11:37:31.550048] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:34.473 [2024-06-10 11:37:31.550097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721915 ] 00:30:34.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:34.473 Zero copy mechanism will not be used. 00:30:34.473 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.473 [2024-06-10 11:37:31.611493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.473 [2024-06-10 11:37:31.670041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:34.782 11:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:35.354 nvme0n1 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # 
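For reference, the digest-error pass traced above reduces to a short RPC sequence. The sketch below is assembled from the commands visible in the trace (the paths, sockets, NQN, injection options and jq filter are copied verbatim from the log); the standalone shell framing (backgrounding bdevperf, the RPC and errs variables) is illustrative and is not the actual digest.sh helper functions:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Initiator side: bdevperf exposes its own RPC socket at /var/tmp/bperf.sock
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Enable the per-status-code NVMe error counters that bdev_get_iostat reports later
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Per the trace, error injection goes through rpc_cmd (the default application socket),
  # not the bperf socket: first cleared, then re-armed once the controller is attached
  $RPC accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled (--ddgst), then arm crc32c corruption
  # with the same options issued at host/digest.sh@67
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then read back the transient-transport-error counter for nvme0n1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  errs=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))

The final arithmetic check mirrors host/digest.sh@71 earlier in this log, where the previous randwrite run had accumulated 179 transient transport errors and passed the same (( 179 > 0 )) test.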
xtrace_disable 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:35.354 11:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:35.354 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:35.354 Zero copy mechanism will not be used. 00:30:35.354 Running I/O for 2 seconds... 00:30:35.354 [2024-06-10 11:37:32.448366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.354 [2024-06-10 11:37:32.448787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.354 [2024-06-10 11:37:32.448820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.354 [2024-06-10 11:37:32.458762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.354 [2024-06-10 11:37:32.459159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.354 [2024-06-10 11:37:32.459182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.354 [2024-06-10 11:37:32.468253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.354 [2024-06-10 11:37:32.468507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.354 [2024-06-10 11:37:32.468528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.477917] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.478273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.478293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.487544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.487906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.496654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.497100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.497120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.505713] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.506066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.506086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.518158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.518503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.518522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.530125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.530472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.530498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.541036] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.541321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.541340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.552582] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.552853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.564685] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.565047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.565067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.355 [2024-06-10 11:37:32.577290] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.355 [2024-06-10 11:37:32.577660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.355 [2024-06-10 11:37:32.577679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.589433] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.589687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.589708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.597970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.598333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.598352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.605586] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.605838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.605858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.610980] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.611330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.611349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.616666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.617030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.617050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.623760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.624107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.624127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.631677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.631924] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.631950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.638961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.639283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.639302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.645263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.645628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.645647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.651534] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.651883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.651903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.657886] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.658250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.658270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.663731] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.663981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.664001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.669771] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.670020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.670039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.676504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 
00:30:35.617 [2024-06-10 11:37:32.676745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.676764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.683123] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.690073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.690311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.690329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-06-10 11:37:32.695437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.617 [2024-06-10 11:37:32.695678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-06-10 11:37:32.695698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.702165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.702515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.702535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.708667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.709022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.709042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.716202] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.716558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.716578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.723056] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.723298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.723317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.728441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.728789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.728812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.736688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.737049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.737068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.744767] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.745120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.745140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.753741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.754078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.754098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.760406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.760765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.760784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.768608] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.768872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.768890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.776964] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.777328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.777348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.783990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.784349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.784368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.789649] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.789998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.790018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.796820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.797078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.797097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.804970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.805222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.805245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.813942] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.814305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.814324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.820593] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.820679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.820697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:35.618 [2024-06-10 11:37:32.826372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.826617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.826637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.831908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.832246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.832265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.618 [2024-06-10 11:37:32.838161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.618 [2024-06-10 11:37:32.838493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.618 [2024-06-10 11:37:32.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.879 [2024-06-10 11:37:32.842847] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.843088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.843107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.847838] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.848075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.848099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.853491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.853861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.853881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.859587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.859832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.859851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.865430] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.865795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.865815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.871095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.871394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.871414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.877168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.877392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.877410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.884794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.885056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.885076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.890842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.891187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.891206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.896311] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.896668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.896688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.901736] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.902090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.902113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.907385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.907629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.907646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.913904] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.914144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.914163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.921935] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.922282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.922302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.931039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.931380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.931399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.940223] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.940570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.940589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.951209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.961489] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.961599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.961616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.974087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.974354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.974374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.986792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.987163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.987183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:32.998850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:32.999196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:32.999214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:33.010867] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:33.011252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:33.011272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.880 [2024-06-10 11:37:33.021125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.880 [2024-06-10 11:37:33.021482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.880 [2024-06-10 11:37:33.021501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.033016] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.033387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.033407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.044411] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.044806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 
[2024-06-10 11:37:33.044829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.054615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.054974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.054994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.063330] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.063684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.063703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.071512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.071882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.071904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.077307] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.077666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.077686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.082694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.083051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.083070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.087851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.088183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.088202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.095432] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.095775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.095795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.881 [2024-06-10 11:37:33.102317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:35.881 [2024-06-10 11:37:33.102554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.881 [2024-06-10 11:37:33.102574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.106758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.107002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.107021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.111027] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.111268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.111288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.115290] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.115528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.115548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.122729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.123240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.123260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.131804] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.132255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.132275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.139750] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.140103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.140122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.148877] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.149232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.149251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.157154] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.157397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.157418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.164330] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.164676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.164696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.173411] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.173783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.173803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.180051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.180393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.180412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.187381] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.187717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.187737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.194071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.194311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.194330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.202747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.203288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.203308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.208925] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.209282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.209301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.217366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.217728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.217747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.223105] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.223195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.223212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.230948] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.142 [2024-06-10 11:37:33.231293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.142 [2024-06-10 11:37:33.231313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.142 [2024-06-10 11:37:33.240654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.241009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.241028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.248123] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 
11:37:33.248367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.248385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.254815] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.255165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.255188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.261159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.261517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.261536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.270340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.270699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.270718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.278908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.279269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.279288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.287452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.287696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.287716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.296453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.296801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.296820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.304807] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with 
pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.305173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.305192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.315920] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.316423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.316443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.327566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.327916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.327935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.339196] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.339575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.339594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.350692] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.350952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.350971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.143 [2024-06-10 11:37:33.359271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.143 [2024-06-10 11:37:33.359667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.143 [2024-06-10 11:37:33.359686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.367417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.404 [2024-06-10 11:37:33.367833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.404 [2024-06-10 11:37:33.367852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.374779] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.404 [2024-06-10 11:37:33.375134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.404 [2024-06-10 11:37:33.375153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.380476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.404 [2024-06-10 11:37:33.380828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.404 [2024-06-10 11:37:33.380847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.389667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.404 [2024-06-10 11:37:33.389912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.404 [2024-06-10 11:37:33.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.395740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.404 [2024-06-10 11:37:33.396099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.404 [2024-06-10 11:37:33.396118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.404 [2024-06-10 11:37:33.402537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.402881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.402900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.407995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.408236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.408255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.414563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.414804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.414831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.419707] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.420076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.420095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.424662] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.424910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.424929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.430572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.430936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.430955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.436675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.436926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.436944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.442644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.442887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.442907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.447204] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.447560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.447580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.452148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.452491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.452514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:36.405 [2024-06-10 11:37:33.460090] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.460426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.460446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.464788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.465028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.465048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.472265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.472612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.472631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.481988] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.482336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.482355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.490996] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.491083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.491100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.498158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.498507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.498527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.505019] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.505380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.505400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.512099] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.512347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.512366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.517997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.518352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.518372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.524588] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.524833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.524853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.532213] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.532305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.532322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.539125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.539366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.539386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.548493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.405 [2024-06-10 11:37:33.548854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.405 [2024-06-10 11:37:33.548874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.405 [2024-06-10 11:37:33.555764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.556276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.556296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.564583] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.564655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.564671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.573113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.573476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.573497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.580607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.580956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.580975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.589743] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.590130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.590149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.600781] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.601142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.601162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.613208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.613586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.406 [2024-06-10 11:37:33.625770] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.406 [2024-06-10 11:37:33.626161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.406 [2024-06-10 11:37:33.626180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.639157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.639534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.639554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.649617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.649882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.649901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.657305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.657656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.657675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.666285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.666361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.666378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.674579] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.674670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.674690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.684264] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.684629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.684648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.692003] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.692355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 
11:37:33.692374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.699808] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.700178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.700197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.706668] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.707029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.707049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.714310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.714665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.714685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.722167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.722533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.722552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.729051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.729398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.729418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.737171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.737531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.737550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.745454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.745792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:36.667 [2024-06-10 11:37:33.745812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.667 [2024-06-10 11:37:33.753738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.667 [2024-06-10 11:37:33.754102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.754122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.759976] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.760222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.760242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.769029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.769120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.779718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.780062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.780082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.787576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.787928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.787948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.797060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.797424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.797443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.807570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.807864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.818959] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.819323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.819342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.828184] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.828312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.828330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.837937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.838308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.847718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.848082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.848101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.857610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.857976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.857995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.866012] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.866374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.866393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.874694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.875052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.875072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.668 [2024-06-10 11:37:33.883279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.668 [2024-06-10 11:37:33.883659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.668 [2024-06-10 11:37:33.883679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.891188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.891556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.891576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.898718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.899074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.899100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.908210] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.908575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.908595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.917159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.917517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.917536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.925689] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.926018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.926038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.932165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.932515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.932535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.940138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.929 [2024-06-10 11:37:33.940499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.929 [2024-06-10 11:37:33.940518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.929 [2024-06-10 11:37:33.949506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.949884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.949903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.957267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.957380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.957398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.964066] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.964417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.964438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.971905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.972323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.972342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.980851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.981201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.981220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.989091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 
[2024-06-10 11:37:33.989444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.989464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:33.999099] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:33.999201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:33.999218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.010642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.011031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.011050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.019578] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.019945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.019964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.028746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.029104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.029124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.040328] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.040637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.040656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.049937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.050300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.050320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.059231] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with 
pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.059588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.059608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.068279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.068352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.068369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.077762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.078097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.078116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.084554] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.084902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.084922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.093479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.093841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.093860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.100565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.100720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.100738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.106874] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.107130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.107150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.112883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.113208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.113227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.119971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.120310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.120332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.126979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.127322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.127342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.930 [2024-06-10 11:37:34.134537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.930 [2024-06-10 11:37:34.134875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.930 [2024-06-10 11:37:34.134894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.931 [2024-06-10 11:37:34.141999] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.931 [2024-06-10 11:37:34.142360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-06-10 11:37:34.142379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.931 [2024-06-10 11:37:34.148218] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:36.931 [2024-06-10 11:37:34.148517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.931 [2024-06-10 11:37:34.148538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.156189] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.156536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.156556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.162477] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.162900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.162920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.169994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.170352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.170371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.179334] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.179583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.179602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.188156] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.188627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.188648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.196523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.196891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.196910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.204426] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.204773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.204793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.213616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.214002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.214022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:37.192 [2024-06-10 11:37:34.223296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.223617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.223637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.233265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.233623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.233642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.242047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.242441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.242461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.251802] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.252053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.192 [2024-06-10 11:37:34.252072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.192 [2024-06-10 11:37:34.260259] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.192 [2024-06-10 11:37:34.260411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.260432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.269137] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.269464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.269484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.278630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.279072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.279091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.287820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.288234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.288253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.296816] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.297229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.297248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.306597] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.306995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.307015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.316090] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.316488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.316508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.326167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.326417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.326437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.335278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.335705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.335725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.344118] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.344500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.344519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.353079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.353552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.353573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.363371] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.363919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.363939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.373644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.373982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.374001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.381566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.381992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.382011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.392157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.392691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.404816] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.405233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.193 [2024-06-10 11:37:34.414535] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.193 [2024-06-10 11:37:34.414896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.193 [2024-06-10 11:37:34.414915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.454 [2024-06-10 11:37:34.423563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.454 [2024-06-10 11:37:34.423944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.454 [2024-06-10 11:37:34.423963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.454 [2024-06-10 11:37:34.433677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.454 [2024-06-10 11:37:34.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.454 [2024-06-10 11:37:34.434056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.454 [2024-06-10 11:37:34.443016] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x830a20) with pdu=0x2000190fef90 00:30:37.454 [2024-06-10 11:37:34.443168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.454 [2024-06-10 11:37:34.443186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.454 00:30:37.454 Latency(us) 00:30:37.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.454 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:37.454 nvme0n1 : 2.00 3777.18 472.15 0.00 0.00 4228.41 2016.49 13006.38 00:30:37.454 =================================================================================================================== 00:30:37.454 Total : 3777.18 472.15 0.00 0.00 4228.41 2016.49 13006.38 00:30:37.454 0 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:37.454 | .driver_specific 00:30:37.454 | .nvme_error 00:30:37.454 | .status_code 00:30:37.454 | .command_transient_transport_error' 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 244 > 0 )) 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1721915 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1721915 ']' 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1721915 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:37.454 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:37.454 11:37:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1721915 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1721915' 00:30:37.716 killing process with pid 1721915 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1721915 00:30:37.716 Received shutdown signal, test time was about 2.000000 seconds 00:30:37.716 00:30:37.716 Latency(us) 00:30:37.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.716 =================================================================================================================== 00:30:37.716 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1721915 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1720025 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1720025 ']' 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1720025 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1720025 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1720025' 00:30:37.716 killing process with pid 1720025 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1720025 00:30:37.716 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1720025 00:30:37.976 00:30:37.976 real 0m15.042s 00:30:37.976 user 0m29.619s 00:30:37.976 sys 0m3.347s 00:30:37.976 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:37.976 11:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:37.976 ************************************ 00:30:37.976 END TEST nvmf_digest_error 00:30:37.976 ************************************ 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 
00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.976 rmmod nvme_tcp 00:30:37.976 rmmod nvme_fabrics 00:30:37.976 rmmod nvme_keyring 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1720025 ']' 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1720025 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1720025 ']' 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1720025 00:30:37.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1720025) - No such process 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1720025 is not found' 00:30:37.976 Process with pid 1720025 is not found 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.976 11:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.520 11:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.520 00:30:40.520 real 0m41.117s 00:30:40.520 user 1m1.918s 00:30:40.520 sys 0m13.199s 00:30:40.520 11:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:40.520 11:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.520 ************************************ 00:30:40.520 END TEST nvmf_digest 00:30:40.520 ************************************ 00:30:40.520 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:30:40.520 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:30:40.520 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:30:40.520 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:40.520 11:37:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:40.520 11:37:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:40.520 11:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.520 ************************************ 00:30:40.520 START TEST nvmf_bdevperf 00:30:40.520 ************************************ 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:40.520 * Looking for test storage... 
00:30:40.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.520 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.521 11:37:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.707 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:48.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:48.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:48.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:48.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:30:48.708 00:30:48.708 --- 10.0.0.2 ping statistics --- 00:30:48.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.708 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:48.708 00:30:48.708 --- 10.0.0.1 ping statistics --- 00:30:48.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.708 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1726794 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1726794 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1726794 ']' 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:48.708 11:37:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.708 [2024-06-10 11:37:45.462236] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:48.708 [2024-06-10 11:37:45.462299] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.708 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.708 [2024-06-10 11:37:45.539161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.709 [2024-06-10 11:37:45.610857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:48.709 [2024-06-10 11:37:45.610894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.709 [2024-06-10 11:37:45.610901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.709 [2024-06-10 11:37:45.610907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.709 [2024-06-10 11:37:45.610912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.709 [2024-06-10 11:37:45.611047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.709 [2024-06-10 11:37:45.611252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.709 [2024-06-10 11:37:45.611253] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 [2024-06-10 11:37:46.329941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 Malloc0 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.300 [2024-06-10 11:37:46.393804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.300 { 00:30:49.300 "params": { 00:30:49.300 "name": "Nvme$subsystem", 00:30:49.300 "trtype": "$TEST_TRANSPORT", 00:30:49.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.300 "adrfam": "ipv4", 00:30:49.300 "trsvcid": "$NVMF_PORT", 00:30:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.300 "hdgst": ${hdgst:-false}, 00:30:49.300 "ddgst": ${ddgst:-false} 00:30:49.300 }, 00:30:49.300 "method": "bdev_nvme_attach_controller" 00:30:49.300 } 00:30:49.300 EOF 00:30:49.300 )") 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:49.300 11:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:49.300 "params": { 00:30:49.300 "name": "Nvme1", 00:30:49.300 "trtype": "tcp", 00:30:49.300 "traddr": "10.0.0.2", 00:30:49.300 "adrfam": "ipv4", 00:30:49.300 "trsvcid": "4420", 00:30:49.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:49.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:49.300 "hdgst": false, 00:30:49.300 "ddgst": false 00:30:49.300 }, 00:30:49.300 "method": "bdev_nvme_attach_controller" 00:30:49.300 }' 00:30:49.300 [2024-06-10 11:37:46.448536] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:49.300 [2024-06-10 11:37:46.448580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727055 ] 00:30:49.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.560 [2024-06-10 11:37:46.528186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.560 [2024-06-10 11:37:46.589538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.560 Running I/O for 1 seconds... 
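For reference, the one-second verify run launched above is driven entirely by the RPCs and bdevperf flags visible in the trace. The sketch below reproduces that setup by hand using the same commands, paths, and addresses; the "subsystems"/"bdev" wrapper around the attach-controller entry is an assumption (the trace prints only the inner entry that gen_nvmf_target_json emits), so treat it as a sketch rather than the harness's exact config.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used in this run

# Target side: same RPCs as the rpc_cmd calls in the trace above
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: bdevperf config equivalent to the JSON printed above.
# The outer "subsystems"/"bdev" wrapper is assumed; only the inner entry appears in the trace.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
$SPDK_DIR/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1

The results of the one-second run follow.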
00:30:50.942
00:30:50.942                                                           Latency(us)
00:30:50.942 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s     Average      min        max
00:30:50.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:50.942 Verification LBA range: start 0x0 length 0x4000
00:30:50.942 Nvme1n1             :       1.01    9770.63    38.17    0.00    0.00    13043.25    2634.04    15728.64
00:30:50.942 ===================================================================================================================
00:30:50.942 Total               :               9770.63    38.17    0.00    0.00    13043.25    2634.04    15728.64
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1727361
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:50.942 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:50.942 {
00:30:50.942 "params": {
00:30:50.943 "name": "Nvme$subsystem",
00:30:50.943 "trtype": "$TEST_TRANSPORT",
00:30:50.943 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:50.943 "adrfam": "ipv4",
00:30:50.943 "trsvcid": "$NVMF_PORT",
00:30:50.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:50.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:50.943 "hdgst": ${hdgst:-false},
00:30:50.943 "ddgst": ${ddgst:-false}
00:30:50.943 },
00:30:50.943 "method": "bdev_nvme_attach_controller"
00:30:50.943 }
00:30:50.943 EOF
00:30:50.943 )")
00:30:50.943 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:30:50.943 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:30:50.943 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:30:50.943 11:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:50.943 "params": {
00:30:50.943 "name": "Nvme1",
00:30:50.943 "trtype": "tcp",
00:30:50.943 "traddr": "10.0.0.2",
00:30:50.943 "adrfam": "ipv4",
00:30:50.943 "trsvcid": "4420",
00:30:50.943 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:50.943 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:50.943 "hdgst": false,
00:30:50.943 "ddgst": false
00:30:50.943 },
00:30:50.943 "method": "bdev_nvme_attach_controller"
00:30:50.943 }'
00:30:50.943 [2024-06-10 11:37:47.922349] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization...
00:30:50.943 [2024-06-10 11:37:47.922400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727361 ]
00:30:50.943 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.943 [2024-06-10 11:37:48.004050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:50.943 [2024-06-10 11:37:48.067126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:51.203 Running I/O for 15 seconds...
00:30:53.746 11:37:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1726794 00:30:53.746 11:37:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:53.746 [2024-06-10 11:37:50.891187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.746 [2024-06-10 11:37:50.891403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.746 [2024-06-10 11:37:50.891760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.746 [2024-06-10 11:37:50.891769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 
[2024-06-10 11:37:50.891926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.891990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.891997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.747 [2024-06-10 11:37:50.892276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.747 [2024-06-10 11:37:50.892284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62784 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 
11:37:50.892576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.748 [2024-06-10 11:37:50.892819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.748 [2024-06-10 11:37:50.892834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.892994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.749 [2024-06-10 11:37:50.893201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893210] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.749 [2024-06-10 11:37:50.893298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa035c0 is same with the state(5) to be set 00:30:53.749 [2024-06-10 11:37:50.893315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.749 [2024-06-10 11:37:50.893321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.749 [2024-06-10 11:37:50.893327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62296 len:8 PRP1 0x0 PRP2 0x0 00:30:53.749 [2024-06-10 11:37:50.893339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893379] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa035c0 was disconnected and freed. reset controller. 
00:30:53.749 [2024-06-10 11:37:50.893421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.749 [2024-06-10 11:37:50.893431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.749 [2024-06-10 11:37:50.893439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.750 [2024-06-10 11:37:50.893445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.750 [2024-06-10 11:37:50.893454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.750 [2024-06-10 11:37:50.893462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.750 [2024-06-10 11:37:50.893470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.750 [2024-06-10 11:37:50.893476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.750 [2024-06-10 11:37:50.893483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.896744] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.896764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.897382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.897399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.897407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.897609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.897809] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.897817] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.897835] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.750 [2024-06-10 11:37:50.901069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.750 [2024-06-10 11:37:50.910436] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.911073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.911109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.911120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.911344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.911547] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.911556] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.911567] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.750 [2024-06-10 11:37:50.914819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.750 [2024-06-10 11:37:50.923993] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.924460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.924478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.924485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.924685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.924893] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.924902] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.924908] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.750 [2024-06-10 11:37:50.928151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.750 [2024-06-10 11:37:50.937504] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.938025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.938040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.938047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.938247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.938446] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.938453] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.938460] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.750 [2024-06-10 11:37:50.941693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.750 [2024-06-10 11:37:50.951052] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.951579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.951594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.951602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.951801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.952007] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.952016] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.952024] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.750 [2024-06-10 11:37:50.955257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.750 [2024-06-10 11:37:50.964610] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.750 [2024-06-10 11:37:50.965120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.750 [2024-06-10 11:37:50.965139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:53.750 [2024-06-10 11:37:50.965147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:53.750 [2024-06-10 11:37:50.965347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:53.750 [2024-06-10 11:37:50.965546] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.750 [2024-06-10 11:37:50.965555] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.750 [2024-06-10 11:37:50.965561] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:50.968797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.013 [2024-06-10 11:37:50.978160] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:50.978701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:50.978717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:50.978724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:50.978931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:50.979132] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:50.979140] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:50.979146] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:50.982381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.013 [2024-06-10 11:37:50.991735] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:50.992305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:50.992321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:50.992329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:50.992528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:50.992728] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:50.992735] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:50.992742] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:50.995987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.013 [2024-06-10 11:37:51.005353] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:51.005937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:51.005981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:51.005993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:51.006218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:51.006427] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:51.006437] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:51.006443] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:51.009692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.013 [2024-06-10 11:37:51.018871] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:51.019455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:51.019477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:51.019485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:51.019686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:51.019894] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:51.019904] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:51.019911] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:51.023147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.013 [2024-06-10 11:37:51.032516] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:51.033158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:51.033205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:51.033216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:51.033444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:51.033649] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:51.033657] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:51.033664] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:51.036919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.013 [2024-06-10 11:37:51.046099] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:51.046689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:51.046712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.013 [2024-06-10 11:37:51.046720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.013 [2024-06-10 11:37:51.046928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.013 [2024-06-10 11:37:51.047132] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.013 [2024-06-10 11:37:51.047141] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.013 [2024-06-10 11:37:51.047148] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.013 [2024-06-10 11:37:51.050399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.013 [2024-06-10 11:37:51.059572] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.013 [2024-06-10 11:37:51.060135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.013 [2024-06-10 11:37:51.060157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.060165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.060367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.060570] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.060580] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.060587] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.063839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.014 [2024-06-10 11:37:51.073197] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.073789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.073809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.073817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.074027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.074231] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.074241] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.074248] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.077494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.014 [2024-06-10 11:37:51.086666] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.087269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.087292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.087299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.087503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.087706] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.087716] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.087722] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.090973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.014 [2024-06-10 11:37:51.100150] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.100630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.100650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.100664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.100872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.101074] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.101085] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.101092] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.104333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.014 [2024-06-10 11:37:51.113709] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.114243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.114264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.114272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.114474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.114675] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.114694] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.114701] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.117950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.014 [2024-06-10 11:37:51.127328] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.127915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.127936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.127944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.128146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.128348] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.128357] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.128364] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.131606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.014 [2024-06-10 11:37:51.140981] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.141498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.141518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.141526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.141727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.141935] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.141950] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.141957] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.145206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.014 [2024-06-10 11:37:51.154572] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.155141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.155162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.155170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.155371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.155572] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.155582] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.155589] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.158845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.014 [2024-06-10 11:37:51.168328] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.014 [2024-06-10 11:37:51.168925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.014 [2024-06-10 11:37:51.168946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.014 [2024-06-10 11:37:51.168953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.014 [2024-06-10 11:37:51.169156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.014 [2024-06-10 11:37:51.169358] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.014 [2024-06-10 11:37:51.169367] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.014 [2024-06-10 11:37:51.169374] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.014 [2024-06-10 11:37:51.172619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.015 [2024-06-10 11:37:51.181990] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.015 [2024-06-10 11:37:51.182554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.015 [2024-06-10 11:37:51.182573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.015 [2024-06-10 11:37:51.182580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.015 [2024-06-10 11:37:51.182782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.015 [2024-06-10 11:37:51.182993] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.015 [2024-06-10 11:37:51.183003] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.015 [2024-06-10 11:37:51.183010] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.015 [2024-06-10 11:37:51.186250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.015 [2024-06-10 11:37:51.195614] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.015 [2024-06-10 11:37:51.196059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.015 [2024-06-10 11:37:51.196078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.015 [2024-06-10 11:37:51.196086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.015 [2024-06-10 11:37:51.196288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.015 [2024-06-10 11:37:51.196489] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.015 [2024-06-10 11:37:51.196500] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.015 [2024-06-10 11:37:51.196506] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.015 [2024-06-10 11:37:51.199748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.015 [2024-06-10 11:37:51.209115] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.015 [2024-06-10 11:37:51.209783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.015 [2024-06-10 11:37:51.209851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.015 [2024-06-10 11:37:51.209864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.015 [2024-06-10 11:37:51.210098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.015 [2024-06-10 11:37:51.210305] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.015 [2024-06-10 11:37:51.210313] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.015 [2024-06-10 11:37:51.210320] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.015 [2024-06-10 11:37:51.213576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.015 [2024-06-10 11:37:51.222758] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.015 [2024-06-10 11:37:51.223384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.015 [2024-06-10 11:37:51.223410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.015 [2024-06-10 11:37:51.223418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.015 [2024-06-10 11:37:51.223621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.015 [2024-06-10 11:37:51.223832] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.015 [2024-06-10 11:37:51.223842] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.015 [2024-06-10 11:37:51.223849] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.015 [2024-06-10 11:37:51.227117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.277 [2024-06-10 11:37:51.236302] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.277 [2024-06-10 11:37:51.236900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.277 [2024-06-10 11:37:51.236923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.277 [2024-06-10 11:37:51.236931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.277 [2024-06-10 11:37:51.237140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.277 [2024-06-10 11:37:51.237340] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.277 [2024-06-10 11:37:51.237349] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.277 [2024-06-10 11:37:51.237356] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.277 [2024-06-10 11:37:51.240599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.277 [2024-06-10 11:37:51.249779] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.277 [2024-06-10 11:37:51.250375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.277 [2024-06-10 11:37:51.250396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.277 [2024-06-10 11:37:51.250403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.277 [2024-06-10 11:37:51.250605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.277 [2024-06-10 11:37:51.250806] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.277 [2024-06-10 11:37:51.250815] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.277 [2024-06-10 11:37:51.250827] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.277 [2024-06-10 11:37:51.254069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.277 [2024-06-10 11:37:51.263244] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.277 [2024-06-10 11:37:51.263834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.277 [2024-06-10 11:37:51.263854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.277 [2024-06-10 11:37:51.263862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.277 [2024-06-10 11:37:51.264064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.277 [2024-06-10 11:37:51.264265] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.277 [2024-06-10 11:37:51.264273] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.277 [2024-06-10 11:37:51.264280] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.277 [2024-06-10 11:37:51.267521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.277 [2024-06-10 11:37:51.276895] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.277 [2024-06-10 11:37:51.277455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.277 [2024-06-10 11:37:51.277476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.277 [2024-06-10 11:37:51.277484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.277 [2024-06-10 11:37:51.277686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.277 [2024-06-10 11:37:51.277895] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.277905] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.277918] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.281165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.278 [2024-06-10 11:37:51.290534] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.291074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.291095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.291103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.291305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.291505] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.291516] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.291523] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.294764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.278 [2024-06-10 11:37:51.304142] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.304733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.304753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.304760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.304968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.305171] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.305181] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.305188] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.308428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.278 [2024-06-10 11:37:51.317785] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.318379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.318400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.318408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.318609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.318811] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.318820] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.318834] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.322081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.278 [2024-06-10 11:37:51.331264] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.331811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.331845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.331854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.332056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.332260] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.332269] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.332276] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.335521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.278 [2024-06-10 11:37:51.344877] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.345445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.345464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.345472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.345673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.345882] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.345891] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.345899] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.349137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.278 [2024-06-10 11:37:51.358498] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.359065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.359085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.359092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.359293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.359494] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.359503] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.359510] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.362748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.278 [2024-06-10 11:37:51.372103] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.372669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.372688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.372695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.372902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.373111] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.373120] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.373126] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.376363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.278 [2024-06-10 11:37:51.385718] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.386304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.386324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.386332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.386533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.278 [2024-06-10 11:37:51.386735] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.278 [2024-06-10 11:37:51.386744] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.278 [2024-06-10 11:37:51.386751] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.278 [2024-06-10 11:37:51.389999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.278 [2024-06-10 11:37:51.399352] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.278 [2024-06-10 11:37:51.399927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.278 [2024-06-10 11:37:51.399986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.278 [2024-06-10 11:37:51.399998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.278 [2024-06-10 11:37:51.400232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.400439] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.400447] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.400455] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.403718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.279 [2024-06-10 11:37:51.412890] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.413370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.413397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.413405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.413608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.413810] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.413819] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.413835] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.417090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.279 [2024-06-10 11:37:51.426444] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.426928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.426952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.426960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.427162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.427364] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.427372] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.427379] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.430639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.279 [2024-06-10 11:37:51.440001] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.440554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.440576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.440584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.440787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.440996] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.441006] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.441013] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.444255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.279 [2024-06-10 11:37:51.453619] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.454315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.454373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.454385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.454619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.454836] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.454846] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.454853] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.458108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.279 [2024-06-10 11:37:51.467101] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.467607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.467633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.467648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.467860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.468064] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.468072] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.468079] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.471327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.279 [2024-06-10 11:37:51.480692] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.481241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.481262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.481270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.481472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.481673] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.481683] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.481690] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.484939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.279 [2024-06-10 11:37:51.494298] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.279 [2024-06-10 11:37:51.494888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.279 [2024-06-10 11:37:51.494911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.279 [2024-06-10 11:37:51.494918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.279 [2024-06-10 11:37:51.495121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.279 [2024-06-10 11:37:51.495322] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.279 [2024-06-10 11:37:51.495333] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.279 [2024-06-10 11:37:51.495340] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.279 [2024-06-10 11:37:51.498586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.541 [2024-06-10 11:37:51.507765] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.508329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.508350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.508358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.508560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.508762] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.508784] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.541 [2024-06-10 11:37:51.508790] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.541 [2024-06-10 11:37:51.512034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.541 [2024-06-10 11:37:51.521389] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.521949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.521971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.521979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.522181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.522382] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.522391] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.541 [2024-06-10 11:37:51.522398] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.541 [2024-06-10 11:37:51.525638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.541 [2024-06-10 11:37:51.535015] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.535621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.535680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.535693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.535938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.536146] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.536156] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.541 [2024-06-10 11:37:51.536164] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.541 [2024-06-10 11:37:51.539418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.541 [2024-06-10 11:37:51.548590] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.549141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.549199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.549211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.549445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.549652] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.549661] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.541 [2024-06-10 11:37:51.549668] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.541 [2024-06-10 11:37:51.552935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.541 [2024-06-10 11:37:51.562101] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.562770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.562837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.562850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.563084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.563291] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.563299] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.541 [2024-06-10 11:37:51.563307] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.541 [2024-06-10 11:37:51.566564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.541 [2024-06-10 11:37:51.575731] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.541 [2024-06-10 11:37:51.576402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-06-10 11:37:51.576459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.541 [2024-06-10 11:37:51.576470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.541 [2024-06-10 11:37:51.576705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.541 [2024-06-10 11:37:51.576926] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.541 [2024-06-10 11:37:51.576936] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.576944] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.580199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.542 [2024-06-10 11:37:51.589371] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.590129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.590187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.590198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.590432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.590638] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.590647] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.590655] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.593926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.542 [2024-06-10 11:37:51.602906] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.603599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.603657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.603669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.603922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.604130] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.604139] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.604146] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.607400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.542 [2024-06-10 11:37:51.616387] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.617108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.617166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.617178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.617412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.617619] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.617627] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.617634] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.620902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.542 [2024-06-10 11:37:51.629892] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.630588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.630646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.630658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.630906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.631113] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.631122] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.631129] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.634384] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.542 [2024-06-10 11:37:51.643364] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.644098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.644155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.644167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.644401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.644608] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.644616] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.644630] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.647908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.542 [2024-06-10 11:37:51.656908] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.657621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.657679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.657692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.657936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.658143] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.658153] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.658160] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.661413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.542 [2024-06-10 11:37:51.670399] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.671087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.671141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.671153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.671384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.671591] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.671599] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.671606] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.674868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.542 [2024-06-10 11:37:51.684042] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.684729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.684780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.542 [2024-06-10 11:37:51.684791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.542 [2024-06-10 11:37:51.685028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.542 [2024-06-10 11:37:51.685234] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.542 [2024-06-10 11:37:51.685243] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.542 [2024-06-10 11:37:51.685250] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.542 [2024-06-10 11:37:51.688505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.542 [2024-06-10 11:37:51.697671] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.542 [2024-06-10 11:37:51.698321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.542 [2024-06-10 11:37:51.698373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.543 [2024-06-10 11:37:51.698384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.543 [2024-06-10 11:37:51.698611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.543 [2024-06-10 11:37:51.698816] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.543 [2024-06-10 11:37:51.698835] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.543 [2024-06-10 11:37:51.698842] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.543 [2024-06-10 11:37:51.702087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.543 [2024-06-10 11:37:51.711246] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.543 [2024-06-10 11:37:51.711928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.543 [2024-06-10 11:37:51.711971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.543 [2024-06-10 11:37:51.711982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.543 [2024-06-10 11:37:51.712205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.543 [2024-06-10 11:37:51.712409] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.543 [2024-06-10 11:37:51.712417] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.543 [2024-06-10 11:37:51.712424] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.543 [2024-06-10 11:37:51.715676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.543 [2024-06-10 11:37:51.724834] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.543 [2024-06-10 11:37:51.725369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.543 [2024-06-10 11:37:51.725410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.543 [2024-06-10 11:37:51.725420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.543 [2024-06-10 11:37:51.725643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.543 [2024-06-10 11:37:51.725857] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.543 [2024-06-10 11:37:51.725866] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.543 [2024-06-10 11:37:51.725873] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.543 [2024-06-10 11:37:51.729115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.543 [2024-06-10 11:37:51.738473] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.543 [2024-06-10 11:37:51.739149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.543 [2024-06-10 11:37:51.739190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.543 [2024-06-10 11:37:51.739200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.543 [2024-06-10 11:37:51.739422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.543 [2024-06-10 11:37:51.739630] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.543 [2024-06-10 11:37:51.739639] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.543 [2024-06-10 11:37:51.739645] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.543 [2024-06-10 11:37:51.742897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.543 [2024-06-10 11:37:51.752054] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.543 [2024-06-10 11:37:51.752611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.543 [2024-06-10 11:37:51.752653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.543 [2024-06-10 11:37:51.752663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.543 [2024-06-10 11:37:51.752896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.543 [2024-06-10 11:37:51.753101] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.543 [2024-06-10 11:37:51.753109] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.543 [2024-06-10 11:37:51.753116] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.543 [2024-06-10 11:37:51.756356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.804 [2024-06-10 11:37:51.765517] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.804 [2024-06-10 11:37:51.766197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.804 [2024-06-10 11:37:51.766240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.804 [2024-06-10 11:37:51.766251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.804 [2024-06-10 11:37:51.766474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.804 [2024-06-10 11:37:51.766679] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.804 [2024-06-10 11:37:51.766686] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.804 [2024-06-10 11:37:51.766693] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.804 [2024-06-10 11:37:51.769947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.804 [2024-06-10 11:37:51.779109] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.804 [2024-06-10 11:37:51.779699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.804 [2024-06-10 11:37:51.779720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.804 [2024-06-10 11:37:51.779728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.804 [2024-06-10 11:37:51.779937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.804 [2024-06-10 11:37:51.780138] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.804 [2024-06-10 11:37:51.780147] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.804 [2024-06-10 11:37:51.780154] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.804 [2024-06-10 11:37:51.783402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.804 [2024-06-10 11:37:51.792747] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.804 [2024-06-10 11:37:51.793326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.804 [2024-06-10 11:37:51.793345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.804 [2024-06-10 11:37:51.793353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.804 [2024-06-10 11:37:51.793554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.804 [2024-06-10 11:37:51.793754] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.793762] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.793768] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.797015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.805 [2024-06-10 11:37:51.806363] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.806929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.806966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.806975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.807192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.807396] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.807405] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.807412] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.810663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.805 [2024-06-10 11:37:51.819827] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.820517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.820575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.820587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.820835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.821043] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.821052] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.821060] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.824317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.805 [2024-06-10 11:37:51.833315] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.833935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.833993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.834013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.834248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.834455] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.834463] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.834470] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.837741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.805 [2024-06-10 11:37:51.846913] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.847472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.847530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.847542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.847776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.847998] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.848007] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.848014] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.851265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.805 [2024-06-10 11:37:51.860439] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.861102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.861160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.861172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.861406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.861613] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.861622] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.861630] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.864901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.805 [2024-06-10 11:37:51.874084] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.874767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.874837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.874850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.875084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.875292] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.875308] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.875316] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.878572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.805 [2024-06-10 11:37:51.887640] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.888356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.888413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.888425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.888660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.888879] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.888889] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.888897] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.892153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.805 [2024-06-10 11:37:51.901149] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.901861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.805 [2024-06-10 11:37:51.901921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.805 [2024-06-10 11:37:51.901935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.805 [2024-06-10 11:37:51.902171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.805 [2024-06-10 11:37:51.902377] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.805 [2024-06-10 11:37:51.902386] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.805 [2024-06-10 11:37:51.902393] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.805 [2024-06-10 11:37:51.905649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.805 [2024-06-10 11:37:51.914638] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.805 [2024-06-10 11:37:51.915344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.915403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.915416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.915650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.915869] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.915878] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.915886] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.919139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.806 [2024-06-10 11:37:51.928300] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.928939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.928999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.929011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.929245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.929452] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.929460] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.929467] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.932747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.806 [2024-06-10 11:37:51.941932] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.942644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.942701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.942712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.942957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.943164] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.943173] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.943180] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.946434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.806 [2024-06-10 11:37:51.955413] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.956115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.956165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.956177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.956405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.956610] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.956618] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.956625] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.959884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.806 [2024-06-10 11:37:51.969049] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.969683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.969731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.969742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.969984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.970191] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.970199] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.970206] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.973451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.806 [2024-06-10 11:37:51.982611] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.983230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.983276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.983286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.983512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.983716] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.983724] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.983731] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:51.986984] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.806 [2024-06-10 11:37:51.996139] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:51.996662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:51.996705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:51.996716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:51.996950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:51.997154] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:51.997163] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:51.997170] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:52.000411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.806 [2024-06-10 11:37:52.009760] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:52.010429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:52.010469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:52.010479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:52.010701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:52.010913] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:52.010922] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:52.010934] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.806 [2024-06-10 11:37:52.014171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.806 [2024-06-10 11:37:52.023326] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.806 [2024-06-10 11:37:52.023923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.806 [2024-06-10 11:37:52.023962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:54.806 [2024-06-10 11:37:52.023972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:54.806 [2024-06-10 11:37:52.024193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:54.806 [2024-06-10 11:37:52.024397] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.806 [2024-06-10 11:37:52.024405] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.806 [2024-06-10 11:37:52.024412] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.027660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.075 [2024-06-10 11:37:52.036827] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.037507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.037545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.037555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.037775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.037987] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.037996] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.038003] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.041235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.075 [2024-06-10 11:37:52.050387] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.050957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.050993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.051004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.051223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.051426] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.051434] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.051441] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.054680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.075 [2024-06-10 11:37:52.064023] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.064632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.064672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.064682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.064909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.065112] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.065121] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.065127] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.068362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.075 [2024-06-10 11:37:52.077518] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.078129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.078163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.078173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.078390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.078593] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.078601] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.078608] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.081851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.075 [2024-06-10 11:37:52.090995] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.091668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.091702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.091713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.091937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.092141] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.092149] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.092156] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.095389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.075 [2024-06-10 11:37:52.104529] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.105180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.105215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.105225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.105442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.105649] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.105657] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.105664] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.108906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.075 [2024-06-10 11:37:52.118060] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.118570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.118604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.118615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.118842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.119045] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.119053] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.075 [2024-06-10 11:37:52.119060] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.075 [2024-06-10 11:37:52.122291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.075 [2024-06-10 11:37:52.131678] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.075 [2024-06-10 11:37:52.132309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.075 [2024-06-10 11:37:52.132343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.075 [2024-06-10 11:37:52.132352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.075 [2024-06-10 11:37:52.132570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.075 [2024-06-10 11:37:52.132774] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.075 [2024-06-10 11:37:52.132782] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.132789] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.136033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.145192] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.145831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.145865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.145877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.146097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.146300] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.146308] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.146314] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.149557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.158707] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.159348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.159382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.159392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.159610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.159813] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.159829] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.159837] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.163069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.172221] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.172842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.172877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.172887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.173105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.173308] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.173316] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.173322] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.176563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.185721] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.186326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.186360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.186370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.186587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.186790] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.186798] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.186805] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.190044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.199187] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.199796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.199837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.199855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.200073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.200275] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.200283] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.200289] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.203522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.212686] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.213341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.213376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.213385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.213603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.213806] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.213815] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.213830] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.217063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.226211] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.226768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.226785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.226793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.226998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.227198] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.227205] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.227211] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.230435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.239771] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.240285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.240300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.240306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.240506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.240705] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.240717] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.240723] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.243954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.253283] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.253834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.253849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.253856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.254055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.254254] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.254262] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.254269] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.257499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.266833] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.267339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.267352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.267360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.267559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.267758] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.267766] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.267772] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.271003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.076 [2024-06-10 11:37:52.280336] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.076 [2024-06-10 11:37:52.280966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.076 [2024-06-10 11:37:52.281001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.076 [2024-06-10 11:37:52.281010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.076 [2024-06-10 11:37:52.281228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.076 [2024-06-10 11:37:52.281431] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.076 [2024-06-10 11:37:52.281439] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.076 [2024-06-10 11:37:52.281446] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.076 [2024-06-10 11:37:52.284684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.076 [2024-06-10 11:37:52.293852] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.336 [2024-06-10 11:37:52.294477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.336 [2024-06-10 11:37:52.294512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.336 [2024-06-10 11:37:52.294522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.336 [2024-06-10 11:37:52.294740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.336 [2024-06-10 11:37:52.294951] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.336 [2024-06-10 11:37:52.294961] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.336 [2024-06-10 11:37:52.294967] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.336 [2024-06-10 11:37:52.298200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.336 [2024-06-10 11:37:52.307359] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.336 [2024-06-10 11:37:52.308015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.336 [2024-06-10 11:37:52.308049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.336 [2024-06-10 11:37:52.308059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.336 [2024-06-10 11:37:52.308277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.336 [2024-06-10 11:37:52.308480] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.336 [2024-06-10 11:37:52.308487] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.336 [2024-06-10 11:37:52.308494] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.336 [2024-06-10 11:37:52.311733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.336 [2024-06-10 11:37:52.320881] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.336 [2024-06-10 11:37:52.321526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.336 [2024-06-10 11:37:52.321560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.336 [2024-06-10 11:37:52.321570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.321788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.321999] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.322008] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.322015] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.325247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.337 [2024-06-10 11:37:52.334411] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.335050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.335085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.335095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.335317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.335519] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.335527] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.335534] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.338777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.337 [2024-06-10 11:37:52.347932] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.348580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.348614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.348624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.348851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.349055] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.349063] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.349069] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.352303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.337 [2024-06-10 11:37:52.361453] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.362070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.362104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.362114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.362331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.362534] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.362542] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.362549] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.365788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.337 [2024-06-10 11:37:52.374932] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.375579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.375613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.375622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.375849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.376052] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.376061] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.376071] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.379303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.337 [2024-06-10 11:37:52.388485] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.389155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.389190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.389200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.389417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.389620] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.389628] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.389635] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.392876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.337 [2024-06-10 11:37:52.402020] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.402669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.402703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.402713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.402939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.403143] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.403151] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.403158] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.406392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.337 [2024-06-10 11:37:52.415540] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.416143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.416178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.416188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.416406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.416608] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.416616] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.416623] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.337 [2024-06-10 11:37:52.419862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.337 [2024-06-10 11:37:52.428997] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.337 [2024-06-10 11:37:52.429527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.337 [2024-06-10 11:37:52.429548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.337 [2024-06-10 11:37:52.429556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.337 [2024-06-10 11:37:52.429755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.337 [2024-06-10 11:37:52.429961] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.337 [2024-06-10 11:37:52.429970] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.337 [2024-06-10 11:37:52.429976] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.433214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.338 [2024-06-10 11:37:52.442551] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.443071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.443086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.443093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.443292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.443491] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.443498] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.443505] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.446731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.338 [2024-06-10 11:37:52.456063] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.456547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.456561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.456568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.456767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.456972] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.456980] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.456986] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.460211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.338 [2024-06-10 11:37:52.469538] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.470047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.470062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.470069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.470268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.470471] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.470478] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.470484] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.473710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.338 [2024-06-10 11:37:52.483039] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.483593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.483607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.483614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.483812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.484017] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.484025] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.484031] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.487255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.338 [2024-06-10 11:37:52.496575] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.497208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.497242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.497252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.497470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.497673] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.497681] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.497687] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.500925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.338 [2024-06-10 11:37:52.510068] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.510631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.510648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.510655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.510861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.511062] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.511070] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.511076] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.514310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.338 [2024-06-10 11:37:52.523657] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.524302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.524337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.524346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.524564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.524768] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.524776] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.524783] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.528025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.338 [2024-06-10 11:37:52.537199] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.537769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.537787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.338 [2024-06-10 11:37:52.537794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.338 [2024-06-10 11:37:52.538000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.338 [2024-06-10 11:37:52.538200] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.338 [2024-06-10 11:37:52.538207] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.338 [2024-06-10 11:37:52.538214] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.338 [2024-06-10 11:37:52.541446] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.338 [2024-06-10 11:37:52.550791] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.338 [2024-06-10 11:37:52.551419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.338 [2024-06-10 11:37:52.551453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.339 [2024-06-10 11:37:52.551464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.339 [2024-06-10 11:37:52.551681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.339 [2024-06-10 11:37:52.551892] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.339 [2024-06-10 11:37:52.551901] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.339 [2024-06-10 11:37:52.551909] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.339 [2024-06-10 11:37:52.555142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.600 [2024-06-10 11:37:52.564298] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.564866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.564884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.564895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.565096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.565295] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.565302] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.565309] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.568537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.600 [2024-06-10 11:37:52.577882] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.578525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.578560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.578570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.578787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.578998] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.579007] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.579014] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.582247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.600 [2024-06-10 11:37:52.591393] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.592038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.592073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.592083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.592300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.592503] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.592511] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.592518] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.595754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.600 [2024-06-10 11:37:52.604904] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.605514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.605548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.605557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.605775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.605987] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.606000] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.606008] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.609240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.600 [2024-06-10 11:37:52.618389] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.618994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.619028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.619039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.619256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.619459] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.619467] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.619474] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.622711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.600 [2024-06-10 11:37:52.631869] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.632476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.632510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.632520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.632738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.632949] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.632958] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.632964] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.636198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.600 [2024-06-10 11:37:52.645347] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.645992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.646027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.646037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.646254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.646457] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.646465] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.646472] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.649711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.600 [2024-06-10 11:37:52.658876] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.659389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.659423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.600 [2024-06-10 11:37:52.659432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.600 [2024-06-10 11:37:52.659650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.600 [2024-06-10 11:37:52.659863] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.600 [2024-06-10 11:37:52.659872] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.600 [2024-06-10 11:37:52.659879] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.600 [2024-06-10 11:37:52.663110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.600 [2024-06-10 11:37:52.672448] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.600 [2024-06-10 11:37:52.673079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.600 [2024-06-10 11:37:52.673113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.673123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.673341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.673544] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.673552] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.673559] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.676796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.601 [2024-06-10 11:37:52.685942] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.686468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.686485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.686493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.686693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.686898] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.686907] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.686913] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.690145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.601 [2024-06-10 11:37:52.699479] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.700124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.700159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.700169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.700390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.700593] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.700601] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.700608] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.703847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.601 [2024-06-10 11:37:52.712989] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.713637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.713672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.713681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.713907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.714111] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.714119] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.714125] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.717358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.601 [2024-06-10 11:37:52.726507] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.727152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.727186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.727196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.727414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.727616] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.727624] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.727631] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.730871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.601 [2024-06-10 11:37:52.740034] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.740687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.740722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.740732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.740959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.741163] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.741171] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.741182] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.744416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.601 [2024-06-10 11:37:52.753571] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.754229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.754264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.754273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.754491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.754694] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.754702] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.754709] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.757950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.601 [2024-06-10 11:37:52.767095] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.767725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.767759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.767769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.767995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.768199] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.768207] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.768213] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.601 [2024-06-10 11:37:52.771447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.601 [2024-06-10 11:37:52.780593] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.601 [2024-06-10 11:37:52.781257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.601 [2024-06-10 11:37:52.781292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.601 [2024-06-10 11:37:52.781301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.601 [2024-06-10 11:37:52.781519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.601 [2024-06-10 11:37:52.781722] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.601 [2024-06-10 11:37:52.781730] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.601 [2024-06-10 11:37:52.781737] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.602 [2024-06-10 11:37:52.784978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.602 [2024-06-10 11:37:52.794129] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.602 [2024-06-10 11:37:52.794785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.602 [2024-06-10 11:37:52.794819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.602 [2024-06-10 11:37:52.794837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.602 [2024-06-10 11:37:52.795054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.602 [2024-06-10 11:37:52.795257] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.602 [2024-06-10 11:37:52.795265] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.602 [2024-06-10 11:37:52.795272] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.602 [2024-06-10 11:37:52.798505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.602 [2024-06-10 11:37:52.807647] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.602 [2024-06-10 11:37:52.808269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.602 [2024-06-10 11:37:52.808304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.602 [2024-06-10 11:37:52.808314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.602 [2024-06-10 11:37:52.808531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.602 [2024-06-10 11:37:52.808735] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.602 [2024-06-10 11:37:52.808742] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.602 [2024-06-10 11:37:52.808749] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.602 [2024-06-10 11:37:52.812016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.602 [2024-06-10 11:37:52.821181] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.602 [2024-06-10 11:37:52.821673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.602 [2024-06-10 11:37:52.821690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.602 [2024-06-10 11:37:52.821698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.602 [2024-06-10 11:37:52.821904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.602 [2024-06-10 11:37:52.822104] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.602 [2024-06-10 11:37:52.822112] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.602 [2024-06-10 11:37:52.822119] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.825348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.862 [2024-06-10 11:37:52.834693] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.835227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.835242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.835250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.835449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.835653] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.835660] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.862 [2024-06-10 11:37:52.835667] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.838900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.862 [2024-06-10 11:37:52.848228] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.848869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.848904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.848915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.849134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.849338] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.849346] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.862 [2024-06-10 11:37:52.849353] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.852592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.862 [2024-06-10 11:37:52.861743] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.862319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.862337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.862344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.862544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.862744] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.862751] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.862 [2024-06-10 11:37:52.862757] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.865990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.862 [2024-06-10 11:37:52.875332] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.875843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.875858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.875865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.876065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.876265] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.876272] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.862 [2024-06-10 11:37:52.876279] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.879689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.862 [2024-06-10 11:37:52.888849] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.889489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.889524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.889533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.889751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.889961] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.889970] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.862 [2024-06-10 11:37:52.889977] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.862 [2024-06-10 11:37:52.893211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.862 [2024-06-10 11:37:52.902362] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.862 [2024-06-10 11:37:52.902928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.862 [2024-06-10 11:37:52.902946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.862 [2024-06-10 11:37:52.902953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.862 [2024-06-10 11:37:52.903153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.862 [2024-06-10 11:37:52.903354] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.862 [2024-06-10 11:37:52.903361] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.903367] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.906595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.863 [2024-06-10 11:37:52.915934] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.916558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.916592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.916602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.916820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.917030] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.917038] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.917044] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.920278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.863 [2024-06-10 11:37:52.929432] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.929923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.929959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.929973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.930193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.930396] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.930404] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.930411] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.933658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.863 [2024-06-10 11:37:52.943005] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.943563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.943580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.943588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.943789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.943996] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.944004] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.944011] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.947239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.863 [2024-06-10 11:37:52.956651] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.957254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.957289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.957299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.957517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.957720] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.957729] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.957736] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.960977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.863 [2024-06-10 11:37:52.970134] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.970637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.970671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.970681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.970907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.971111] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.971123] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.971130] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.974363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.863 [2024-06-10 11:37:52.983702] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.984351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.863 [2024-06-10 11:37:52.984386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.863 [2024-06-10 11:37:52.984396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.863 [2024-06-10 11:37:52.984614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.863 [2024-06-10 11:37:52.984816] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.863 [2024-06-10 11:37:52.984833] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.863 [2024-06-10 11:37:52.984840] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.863 [2024-06-10 11:37:52.988077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.863 [2024-06-10 11:37:52.997231] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.863 [2024-06-10 11:37:52.997884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:52.997919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:52.997929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:52.998146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:52.998349] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:52.998357] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:52.998363] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.001603] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.864 [2024-06-10 11:37:53.010751] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.011280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.011298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.011305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.011505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.011705] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:53.011713] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:53.011719] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.014956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.864 [2024-06-10 11:37:53.024300] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.024906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.024940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.024951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.025169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.025372] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:53.025381] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:53.025388] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.028626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.864 [2024-06-10 11:37:53.037796] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.038447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.038482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.038492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.038709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.038919] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:53.038928] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:53.038935] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.042167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.864 [2024-06-10 11:37:53.051319] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.051925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.051959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.051971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.052191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.052394] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:53.052402] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:53.052409] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.055648] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.864 [2024-06-10 11:37:53.064796] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.065326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.065344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.065351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.065556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.065756] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.864 [2024-06-10 11:37:53.065763] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.864 [2024-06-10 11:37:53.065769] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.864 [2024-06-10 11:37:53.069003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.864 [2024-06-10 11:37:53.078333] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.864 [2024-06-10 11:37:53.078905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.864 [2024-06-10 11:37:53.078940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:55.864 [2024-06-10 11:37:53.078951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:55.864 [2024-06-10 11:37:53.079171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:55.864 [2024-06-10 11:37:53.079374] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.865 [2024-06-10 11:37:53.079383] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.865 [2024-06-10 11:37:53.079390] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.865 [2024-06-10 11:37:53.082632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.148 [2024-06-10 11:37:53.091787] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.148 [2024-06-10 11:37:53.092313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.148 [2024-06-10 11:37:53.092330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.148 [2024-06-10 11:37:53.092338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.148 [2024-06-10 11:37:53.092537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.148 [2024-06-10 11:37:53.092737] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.148 [2024-06-10 11:37:53.092744] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.148 [2024-06-10 11:37:53.092751] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.148 [2024-06-10 11:37:53.095988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.148 [2024-06-10 11:37:53.105322] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.148 [2024-06-10 11:37:53.105916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.148 [2024-06-10 11:37:53.105950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.105962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.106180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.106384] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.106392] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.106403] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.109651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.149 [2024-06-10 11:37:53.118819] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.119461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.119495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.119505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.119723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.119934] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.119943] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.119950] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.123187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.149 [2024-06-10 11:37:53.132358] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.133002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.133037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.133047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.133264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.133467] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.133476] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.133482] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.136726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.149 [2024-06-10 11:37:53.145897] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.146545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.146580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.146589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.146806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.147018] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.147027] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.147035] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.150273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.149 [2024-06-10 11:37:53.159517] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.160142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.160177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.160187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.160405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.160608] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.160616] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.160622] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.163869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.149 [2024-06-10 11:37:53.173037] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.173553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.173569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.173577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.173777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.173983] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.173992] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.173999] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.177231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.149 [2024-06-10 11:37:53.186576] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.187066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.187082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.187088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.187288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.187487] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.187495] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.187501] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.190732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.149 [2024-06-10 11:37:53.200081] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.200618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.200632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.200639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.200844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.201048] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.201056] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.201062] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.204296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.149 [2024-06-10 11:37:53.213649] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.149 [2024-06-10 11:37:53.214297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.149 [2024-06-10 11:37:53.214331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.149 [2024-06-10 11:37:53.214341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.149 [2024-06-10 11:37:53.214558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.149 [2024-06-10 11:37:53.214761] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.149 [2024-06-10 11:37:53.214769] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.149 [2024-06-10 11:37:53.214776] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.149 [2024-06-10 11:37:53.218015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.150 [2024-06-10 11:37:53.227168] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.227695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.227713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.227720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.227925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.228125] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.228132] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.228138] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.231369] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.150 [2024-06-10 11:37:53.240716] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.241237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.241252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.241259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.241458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.241657] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.241665] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.241671] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.244907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.150 [2024-06-10 11:37:53.254241] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.254752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.254786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.254797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.255021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.255225] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.255233] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.255240] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.258474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.150 [2024-06-10 11:37:53.267810] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.268380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.268397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.268404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.268604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.268803] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.268810] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.268817] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.272047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.150 [2024-06-10 11:37:53.281396] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.282040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.282075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.282085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.282302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.282506] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.282513] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.282520] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.285757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.150 [2024-06-10 11:37:53.294910] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.295545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.295579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.295594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.295811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.296022] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.296031] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.296038] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.299270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.150 [2024-06-10 11:37:53.308419] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.309063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.309098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.309108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.309325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.309529] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.309538] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.309544] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.312780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.150 [2024-06-10 11:37:53.321938] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.150 [2024-06-10 11:37:53.322528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.150 [2024-06-10 11:37:53.322562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.150 [2024-06-10 11:37:53.322572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.150 [2024-06-10 11:37:53.322790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.150 [2024-06-10 11:37:53.323000] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.150 [2024-06-10 11:37:53.323008] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.150 [2024-06-10 11:37:53.323016] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.150 [2024-06-10 11:37:53.326248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.151 [2024-06-10 11:37:53.335409] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.151 [2024-06-10 11:37:53.336038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.151 [2024-06-10 11:37:53.336073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.151 [2024-06-10 11:37:53.336083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.151 [2024-06-10 11:37:53.336301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.151 [2024-06-10 11:37:53.336503] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.151 [2024-06-10 11:37:53.336515] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.151 [2024-06-10 11:37:53.336522] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.151 [2024-06-10 11:37:53.339759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.151 [2024-06-10 11:37:53.348911] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.151 [2024-06-10 11:37:53.349485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.151 [2024-06-10 11:37:53.349502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.151 [2024-06-10 11:37:53.349509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.151 [2024-06-10 11:37:53.349709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.151 [2024-06-10 11:37:53.349913] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.151 [2024-06-10 11:37:53.349921] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.151 [2024-06-10 11:37:53.349928] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.151 [2024-06-10 11:37:53.353157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.151 [2024-06-10 11:37:53.362497] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.151 [2024-06-10 11:37:53.363041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.151 [2024-06-10 11:37:53.363076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.151 [2024-06-10 11:37:53.363086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.151 [2024-06-10 11:37:53.363304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.151 [2024-06-10 11:37:53.363507] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.151 [2024-06-10 11:37:53.363514] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.151 [2024-06-10 11:37:53.363521] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.151 [2024-06-10 11:37:53.366757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.411 [2024-06-10 11:37:53.376100] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.411 [2024-06-10 11:37:53.376673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.411 [2024-06-10 11:37:53.376690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.411 [2024-06-10 11:37:53.376698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.411 [2024-06-10 11:37:53.376902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.411 [2024-06-10 11:37:53.377102] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.377110] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.377116] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.380346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.412 [2024-06-10 11:37:53.389685] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.390316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.390350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.390360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.390578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.390781] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.390789] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.390796] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.394039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.412 [2024-06-10 11:37:53.403189] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.403790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.403830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.403842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.404063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.404265] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.404273] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.404279] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.407511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.412 [2024-06-10 11:37:53.416655] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.417304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.417338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.417348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.417566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.417769] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.417777] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.417784] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.421024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.412 [2024-06-10 11:37:53.430166] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.430725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.430759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.430769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.430999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.431203] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.431211] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.431217] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.434459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.412 [2024-06-10 11:37:53.443799] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.444404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.444438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.444448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.444665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.444876] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.444885] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.444892] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.412 [2024-06-10 11:37:53.448122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.412 [2024-06-10 11:37:53.457272] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.412 [2024-06-10 11:37:53.457848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.412 [2024-06-10 11:37:53.457882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.412 [2024-06-10 11:37:53.457894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.412 [2024-06-10 11:37:53.458113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.412 [2024-06-10 11:37:53.458315] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.412 [2024-06-10 11:37:53.458324] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.412 [2024-06-10 11:37:53.458330] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.461571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.413 [2024-06-10 11:37:53.470913] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.471517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.471551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.471561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.471779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.471988] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.471997] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.472008] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.475240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.413 [2024-06-10 11:37:53.484389] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.484977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.484994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.485002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.485202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.485401] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.485409] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.485415] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.488643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.413 [2024-06-10 11:37:53.497979] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.498503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.498517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.498524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.498724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.498928] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.498937] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.498943] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.502170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.413 [2024-06-10 11:37:53.511503] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.512108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.512142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.512152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.512369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.512572] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.512580] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.512587] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.515829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.413 [2024-06-10 11:37:53.524973] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.525621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.525656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.525665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.525891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.526094] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.526102] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.526109] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.529341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.413 [2024-06-10 11:37:53.538498] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.539134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.539168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.413 [2024-06-10 11:37:53.539178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.413 [2024-06-10 11:37:53.539396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.413 [2024-06-10 11:37:53.539599] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.413 [2024-06-10 11:37:53.539607] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.413 [2024-06-10 11:37:53.539614] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.413 [2024-06-10 11:37:53.542857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.413 [2024-06-10 11:37:53.552010] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.413 [2024-06-10 11:37:53.552661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.413 [2024-06-10 11:37:53.552695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.552705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.552930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.553134] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.553142] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.553149] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.556380] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.414 [2024-06-10 11:37:53.565541] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.565995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.566029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.566040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.566259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.566469] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.566477] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.566483] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.569721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.414 [2024-06-10 11:37:53.579064] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.579690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.579725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.579735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.579960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.580164] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.580172] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.580178] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.583413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.414 [2024-06-10 11:37:53.592565] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.593217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.593251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.593261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.593479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.593681] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.593689] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.593696] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.596937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.414 [2024-06-10 11:37:53.606085] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.606712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.606747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.606757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.606983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.607187] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.607195] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.607202] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.610439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.414 [2024-06-10 11:37:53.619582] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.620235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.620269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.620279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.620497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.620699] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.414 [2024-06-10 11:37:53.620707] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.414 [2024-06-10 11:37:53.620715] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.414 [2024-06-10 11:37:53.623955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.414 [2024-06-10 11:37:53.633118] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.414 [2024-06-10 11:37:53.633729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.414 [2024-06-10 11:37:53.633764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.414 [2024-06-10 11:37:53.633774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.414 [2024-06-10 11:37:53.634000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.414 [2024-06-10 11:37:53.634203] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.415 [2024-06-10 11:37:53.634211] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.415 [2024-06-10 11:37:53.634218] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.637450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.646604] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.647125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.647159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.647169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.647388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.647591] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.647599] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.647606] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.650849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.660189] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.660815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.660856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.660870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.661088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.661291] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.661299] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.661305] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.664541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.673688] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.674347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.674381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.674391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.674608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.674811] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.674820] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.674836] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.678068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.687219] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.687873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.687908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.687919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.688137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.688340] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.688348] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.688355] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.691595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.700747] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.701274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.701291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.701299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.701499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.701698] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.701709] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.701716] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.704949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.714280] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.714788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.714803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.714810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.715014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.715214] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.715221] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.715227] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.718452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.727789] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.728300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.728314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.728321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.728521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.728720] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.728727] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.728733] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.731963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.741307] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.741812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.741830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.741837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.742037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.742236] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.742242] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.742249] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.745476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.754818] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.755454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.755488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.755498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.755715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.755927] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.755936] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.755942] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.759175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.768327] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.768888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.768906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.768913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.769113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.769312] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.769320] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.769326] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.772554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.781893] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.782476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.782511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.782521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.782738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.782949] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.782958] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.782964] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.786198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.795345] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.795916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.795950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.795960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.796182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.796385] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.796393] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.796400] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.799639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.677 [2024-06-10 11:37:53.808980] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.809586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-06-10 11:37:53.809620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.677 [2024-06-10 11:37:53.809630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.677 [2024-06-10 11:37:53.809856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.677 [2024-06-10 11:37:53.810060] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.677 [2024-06-10 11:37:53.810068] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.677 [2024-06-10 11:37:53.810075] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.677 [2024-06-10 11:37:53.813308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.677 [2024-06-10 11:37:53.822457] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.677 [2024-06-10 11:37:53.823113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.823148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.823158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.823376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.823578] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.823586] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.823593] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.826833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.678 [2024-06-10 11:37:53.835991] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.678 [2024-06-10 11:37:53.836617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.836651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.836661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.836887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.837090] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.837099] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.837109] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.840340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.678 [2024-06-10 11:37:53.849482] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.678 [2024-06-10 11:37:53.850101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.850136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.850146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.850365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.850567] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.850576] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.850583] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.853826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.678 [2024-06-10 11:37:53.862978] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.678 [2024-06-10 11:37:53.863605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.863639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.863648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.863875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.864078] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.864086] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.864093] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.867326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.678 [2024-06-10 11:37:53.876477] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.678 [2024-06-10 11:37:53.877126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.877160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.877170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.877388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.877591] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.877599] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.877606] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.881021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
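The repeated errno = 111 in the entries above is ECONNREFUSED: every reconnect attempt that posix_sock_create() makes toward 10.0.0.2 port 4420 is refused because nothing is listening there while the target application is down (the kill and the restart of the target are visible just below). As an illustration only, not SPDK code, the following minimal standalone C sketch reproduces the same errno by connecting to a placeholder address and port with no listener:

/* connect_refused.c - minimal sketch: a TCP connect() to a port with no
 * listener fails with ECONNREFUSED (errno 111 on Linux), the same errno
 * reported by posix_sock_create() in the log above. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder address, not the test's 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

The host side keeps scheduling resets and failing them in _bdev_nvme_reset_ctrlr_complete() until a listener is available again, which the target restart below is meant to provide.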
00:30:56.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1726794 Killed "${NVMF_APP[@]}" "$@" 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.678 [2024-06-10 11:37:53.889988] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.678 [2024-06-10 11:37:53.890510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-06-10 11:37:53.890528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.678 [2024-06-10 11:37:53.890535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.678 [2024-06-10 11:37:53.890736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.678 [2024-06-10 11:37:53.890940] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.678 [2024-06-10 11:37:53.890949] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.678 [2024-06-10 11:37:53.890956] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.678 [2024-06-10 11:37:53.894184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1728285 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1728285 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1728285 ']' 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
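After the old target is killed, tgt_init starts a fresh nvmf_tgt (pid 1728285 here) and waitforlisten blocks until that process is up and accepting connections on the SPDK RPC UNIX-domain socket /var/tmp/spdk.sock, as the echoed message says. A rough standalone sketch of that wait, assuming only POSIX sockets (the retry count and interval below are illustrative, not the values the test framework actually uses), could look like this:

/* wait_for_rpc_sock.c - hedged sketch of a "waitforlisten"-style helper:
 * poll until a UNIX-domain socket (e.g. /var/tmp/spdk.sock) accepts a
 * connection. Illustrative only; not the SPDK test framework code. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_sock_ready(const char *path)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int ok = 0;

    if (fd < 0)
        return 0;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";
    /* 100 attempts, 100 ms apart: illustrative numbers only. */
    for (int i = 0; i < 100; i++) {
        if (rpc_sock_ready(path)) {
            printf("target is listening on %s\n", path);
            return 0;
        }
        usleep(100 * 1000);
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}

The real helper is passed the pid (1728285) presumably so it can also notice if the process dies before the socket appears; the sketch only captures the connect-and-retry idea behind the log message.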
00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:56.678 11:37:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.937 [2024-06-10 11:37:53.903528] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.937 [2024-06-10 11:37:53.904173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.937 [2024-06-10 11:37:53.904207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.937 [2024-06-10 11:37:53.904217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.937 [2024-06-10 11:37:53.904435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.937 [2024-06-10 11:37:53.904638] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.937 [2024-06-10 11:37:53.904647] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.904654] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.907893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:53.917045] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.917616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.917637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.917644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.917851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.918051] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.918059] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.918065] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.921292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 [2024-06-10 11:37:53.930633] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.931264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.931299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.931309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.931527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.931730] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.931738] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.931745] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.934997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:53.944150] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.944769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.944803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.944814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.945040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.945105] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:30:56.938 [2024-06-10 11:37:53.945145] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.938 [2024-06-10 11:37:53.945244] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.945252] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.945259] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.948492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 [2024-06-10 11:37:53.957644] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.958276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.958312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.958327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.958545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.958749] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.958758] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.958765] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.962006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:53.971163] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.971712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.971747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.971758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.971985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.972190] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.972201] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.972208] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.975439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.938 [2024-06-10 11:37:53.984786] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.985431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.985451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.985458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.985659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.985865] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.985875] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.985883] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:53.989114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:53.998262] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:53.998917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:53.998952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:53.998964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:53.999183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:53.999387] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:53.999400] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:53.999407] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.002646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 [2024-06-10 11:37:54.011800] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.012119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:56.938 [2024-06-10 11:37:54.012373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.012392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.012399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.012600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.012800] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.012808] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.012814] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.016049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:54.025397] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.025938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.025977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.025990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.026211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.026416] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.026427] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.026434] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.029673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 [2024-06-10 11:37:54.039040] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.039595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.039631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.039642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.039870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.040074] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.040084] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.040092] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.043329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:54.052673] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.053318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.053357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.053367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.053587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.053791] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.053800] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.053808] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.057050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.938 [2024-06-10 11:37:54.066197] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.066775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.066792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.066800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.067005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.067206] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.067214] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.067221] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.070449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:54.072945] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.938 [2024-06-10 11:37:54.072972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.938 [2024-06-10 11:37:54.072978] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.938 [2024-06-10 11:37:54.072985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.938 [2024-06-10 11:37:54.072990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.938 [2024-06-10 11:37:54.073130] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.938 [2024-06-10 11:37:54.073331] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.938 [2024-06-10 11:37:54.073332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.938 [2024-06-10 11:37:54.079794] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.080331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.080370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.080382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.080606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.080815] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.080832] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.080840] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
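For reference, the "Total cores available: 3" notice and the three "Reactor started on core 1/2/3" lines above follow directly from the mask the restarted target was launched with (-m 0xE, mirrored into the DPDK EAL as -c 0xE): 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 excluded. A minimal sketch, separate from the test itself, that decodes such a mask the same way:

#include <stdio.h>

int main(void)
{
    /* SPDK/DPDK-style hex core mask: bit N set means core N is used.
     * 0xE = 0b1110 -> cores 1, 2, 3, matching the three reactors in the log. */
    unsigned long mask = 0xE;
    int count = 0;
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf("core %d selected\n", core);
            count++;
        }
    }
    printf("total cores: %d\n", count);
    return 0;
}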
00:30:56.938 [2024-06-10 11:37:54.084079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:54.093436] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.094136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.094174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.094186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.094407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.938 [2024-06-10 11:37:54.094611] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.938 [2024-06-10 11:37:54.094619] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.938 [2024-06-10 11:37:54.094626] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.938 [2024-06-10 11:37:54.097873] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.938 [2024-06-10 11:37:54.107026] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.938 [2024-06-10 11:37:54.107591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.938 [2024-06-10 11:37:54.107609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.938 [2024-06-10 11:37:54.107617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.938 [2024-06-10 11:37:54.107818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.939 [2024-06-10 11:37:54.108025] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.939 [2024-06-10 11:37:54.108034] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.939 [2024-06-10 11:37:54.108041] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.939 [2024-06-10 11:37:54.111269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.939 [2024-06-10 11:37:54.120612] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.939 [2024-06-10 11:37:54.121120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.939 [2024-06-10 11:37:54.121157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.939 [2024-06-10 11:37:54.121168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.939 [2024-06-10 11:37:54.121388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.939 [2024-06-10 11:37:54.121591] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.939 [2024-06-10 11:37:54.121599] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.939 [2024-06-10 11:37:54.121606] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.939 [2024-06-10 11:37:54.124856] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.939 [2024-06-10 11:37:54.134214] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.939 [2024-06-10 11:37:54.134856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.939 [2024-06-10 11:37:54.134891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.939 [2024-06-10 11:37:54.134902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.939 [2024-06-10 11:37:54.135123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.939 [2024-06-10 11:37:54.135326] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.939 [2024-06-10 11:37:54.135343] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.939 [2024-06-10 11:37:54.135350] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.939 [2024-06-10 11:37:54.138591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.939 [2024-06-10 11:37:54.147740] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.939 [2024-06-10 11:37:54.148266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.939 [2024-06-10 11:37:54.148301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:56.939 [2024-06-10 11:37:54.148311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:56.939 [2024-06-10 11:37:54.148529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:56.939 [2024-06-10 11:37:54.148732] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.939 [2024-06-10 11:37:54.148740] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.939 [2024-06-10 11:37:54.148747] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.939 [2024-06-10 11:37:54.151986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.198 [2024-06-10 11:37:54.161328] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.198 [2024-06-10 11:37:54.161761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.198 [2024-06-10 11:37:54.161777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.198 [2024-06-10 11:37:54.161785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.198 [2024-06-10 11:37:54.161990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.198 [2024-06-10 11:37:54.162191] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.198 [2024-06-10 11:37:54.162198] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.198 [2024-06-10 11:37:54.162205] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.198 [2024-06-10 11:37:54.165435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.198 [2024-06-10 11:37:54.174963] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.198 [2024-06-10 11:37:54.175481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.175496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.175508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.175709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.175992] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.176001] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.176008] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.179237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.199 [2024-06-10 11:37:54.188567] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.189193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.189228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.189238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.189457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.189660] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.189668] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.189675] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.192912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.199 [2024-06-10 11:37:54.202063] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.202683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.202718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.202727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.202952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.203156] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.203164] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.203171] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.206402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.199 [2024-06-10 11:37:54.215552] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.216198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.216233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.216243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.216461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.216664] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.216676] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.216683] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.219921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.199 [2024-06-10 11:37:54.229072] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.229692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.229727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.229737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.229962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.230166] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.230174] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.230180] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.233421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.199 [2024-06-10 11:37:54.242578] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.243048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.243068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.243076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.243277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.243477] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.243485] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.243491] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.246720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.199 [2024-06-10 11:37:54.256066] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.256629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.256644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.256651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.256856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.257056] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.257063] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.257070] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.260296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.199 [2024-06-10 11:37:54.269636] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.270311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.270346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.270356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.270574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.270777] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.270785] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.270792] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.274031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.199 [2024-06-10 11:37:54.283181] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.283855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.283889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.283901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.199 [2024-06-10 11:37:54.284121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.199 [2024-06-10 11:37:54.284323] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.199 [2024-06-10 11:37:54.284332] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.199 [2024-06-10 11:37:54.284339] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.199 [2024-06-10 11:37:54.287579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.199 [2024-06-10 11:37:54.296729] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.199 [2024-06-10 11:37:54.297361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.199 [2024-06-10 11:37:54.297395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.199 [2024-06-10 11:37:54.297406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.297623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.297833] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.297842] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.297849] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.301081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.200 [2024-06-10 11:37:54.310232] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.310743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.310777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.310788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.311018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.311222] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.311230] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.311236] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.314471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.200 [2024-06-10 11:37:54.323812] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.324477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.324512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.324523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.324742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.324952] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.324962] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.324969] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.328199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.200 [2024-06-10 11:37:54.337351] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.338025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.338061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.338071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.338289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.338492] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.338501] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.338508] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.341747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.200 [2024-06-10 11:37:54.350904] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.351388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.351422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.351433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.351654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.351863] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.351872] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.351883] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.355115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.200 [2024-06-10 11:37:54.364459] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.365116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.365151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.365161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.365379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.365582] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.365589] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.365596] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.368839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.200 [2024-06-10 11:37:54.377997] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.378650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.378684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.378694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.378918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.379122] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.379130] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.379137] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.382371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.200 [2024-06-10 11:37:54.391523] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.392116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.392134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.392141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.392341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.392540] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.200 [2024-06-10 11:37:54.392548] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.200 [2024-06-10 11:37:54.392554] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.200 [2024-06-10 11:37:54.395783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.200 [2024-06-10 11:37:54.405128] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.200 [2024-06-10 11:37:54.405651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.200 [2024-06-10 11:37:54.405669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.200 [2024-06-10 11:37:54.405676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.200 [2024-06-10 11:37:54.405880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.200 [2024-06-10 11:37:54.406080] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.201 [2024-06-10 11:37:54.406088] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.201 [2024-06-10 11:37:54.406094] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.201 [2024-06-10 11:37:54.409323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.201 [2024-06-10 11:37:54.418666] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.201 [2024-06-10 11:37:54.419277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.201 [2024-06-10 11:37:54.419313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.201 [2024-06-10 11:37:54.419323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.201 [2024-06-10 11:37:54.419540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.201 [2024-06-10 11:37:54.419743] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.201 [2024-06-10 11:37:54.419751] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.201 [2024-06-10 11:37:54.419758] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.462 [2024-06-10 11:37:54.422995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.462 [2024-06-10 11:37:54.432151] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.462 [2024-06-10 11:37:54.432706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.462 [2024-06-10 11:37:54.432741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.462 [2024-06-10 11:37:54.432751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.462 [2024-06-10 11:37:54.432984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.462 [2024-06-10 11:37:54.433188] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.462 [2024-06-10 11:37:54.433197] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.462 [2024-06-10 11:37:54.433204] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.462 [2024-06-10 11:37:54.436438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.462 [2024-06-10 11:37:54.445779] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.462 [2024-06-10 11:37:54.446286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.462 [2024-06-10 11:37:54.446320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.462 [2024-06-10 11:37:54.446331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.462 [2024-06-10 11:37:54.446549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.462 [2024-06-10 11:37:54.446755] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.462 [2024-06-10 11:37:54.446763] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.462 [2024-06-10 11:37:54.446770] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.462 [2024-06-10 11:37:54.450009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.462 [2024-06-10 11:37:54.459357] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.462 [2024-06-10 11:37:54.460023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.462 [2024-06-10 11:37:54.460058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.462 [2024-06-10 11:37:54.460067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.462 [2024-06-10 11:37:54.460285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.462 [2024-06-10 11:37:54.460488] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.462 [2024-06-10 11:37:54.460496] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.462 [2024-06-10 11:37:54.460503] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.462 [2024-06-10 11:37:54.463740] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.462 [2024-06-10 11:37:54.472897] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.462 [2024-06-10 11:37:54.473553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.462 [2024-06-10 11:37:54.473587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.462 [2024-06-10 11:37:54.473597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.462 [2024-06-10 11:37:54.473815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.462 [2024-06-10 11:37:54.474024] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.462 [2024-06-10 11:37:54.474033] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.462 [2024-06-10 11:37:54.474040] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.462 [2024-06-10 11:37:54.477274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.462 [2024-06-10 11:37:54.486426] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.462 [2024-06-10 11:37:54.487132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.462 [2024-06-10 11:37:54.487167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.462 [2024-06-10 11:37:54.487178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.462 [2024-06-10 11:37:54.487397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.462 [2024-06-10 11:37:54.487600] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.487608] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.487614] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.490858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.463 [2024-06-10 11:37:54.500013] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.500299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.500316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.500323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.500523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.500722] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.500730] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.500736] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.503969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.463 [2024-06-10 11:37:54.513499] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.514162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.514197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.514207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.514424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.514627] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.514635] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.514642] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.517885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.463 [2024-06-10 11:37:54.527039] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.527575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.527592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.527599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.527799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.528004] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.528012] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.528018] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.531245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.463 [2024-06-10 11:37:54.540590] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.541210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.541245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.541259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.541476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.541679] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.541688] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.541695] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.544936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.463 [2024-06-10 11:37:54.554088] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.554503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.554519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.554527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.554727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.554932] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.554940] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.554947] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.558176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.463 [2024-06-10 11:37:54.567706] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.568358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.568392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.568403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.568620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.568831] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.568840] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.568847] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.572081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.463 [2024-06-10 11:37:54.581231] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.581523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.581541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.581548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.581749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.581954] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.581972] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.581979] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.585207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.463 [2024-06-10 11:37:54.594738] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.463 [2024-06-10 11:37:54.595256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.463 [2024-06-10 11:37:54.595271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.463 [2024-06-10 11:37:54.595278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.463 [2024-06-10 11:37:54.595478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.463 [2024-06-10 11:37:54.595677] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.463 [2024-06-10 11:37:54.595685] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.463 [2024-06-10 11:37:54.595691] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.463 [2024-06-10 11:37:54.598925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.463 [2024-06-10 11:37:54.608264] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.608829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.608843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.608850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.609049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.609248] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.609256] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.609262] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.612489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.464 [2024-06-10 11:37:54.621835] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.622352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.622366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.622373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.622571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.622771] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.622778] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.622784] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.626014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.464 [2024-06-10 11:37:54.635360] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.635918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.635932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.635939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.636139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.636338] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.636347] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.636354] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.639581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.464 [2024-06-10 11:37:54.648929] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.649453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.649466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.649473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.649672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.649875] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.649883] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.649890] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.653118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.464 [2024-06-10 11:37:54.662458] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.662962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.662977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.662984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.663183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.663383] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.663390] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.663397] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.666642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.464 [2024-06-10 11:37:54.675995] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.464 [2024-06-10 11:37:54.676361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.464 [2024-06-10 11:37:54.676375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.464 [2024-06-10 11:37:54.676382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.464 [2024-06-10 11:37:54.676585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.464 [2024-06-10 11:37:54.676783] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.464 [2024-06-10 11:37:54.676792] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.464 [2024-06-10 11:37:54.676798] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.464 [2024-06-10 11:37:54.680030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.726 [2024-06-10 11:37:54.689563] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.726 [2024-06-10 11:37:54.690196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.726 [2024-06-10 11:37:54.690232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.726 [2024-06-10 11:37:54.690242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.726 [2024-06-10 11:37:54.690460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.726 [2024-06-10 11:37:54.690663] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.726 [2024-06-10 11:37:54.690672] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.726 [2024-06-10 11:37:54.690679] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.726 [2024-06-10 11:37:54.693919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.726 [2024-06-10 11:37:54.703080] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.726 [2024-06-10 11:37:54.703689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.726 [2024-06-10 11:37:54.703723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.726 [2024-06-10 11:37:54.703733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.703958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.704162] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.704171] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.704178] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.707413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.727 [2024-06-10 11:37:54.716572] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.717209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.717244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.717254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.717472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.717675] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.717684] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.717697] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.720939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.727 [2024-06-10 11:37:54.730096] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.730649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.730667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.730675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.730880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.731081] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.731089] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.731095] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.734335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.727 [2024-06-10 11:37:54.743681] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.744321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.744356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.744367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.744585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.744789] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.744798] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.744806] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.748049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.727 [2024-06-10 11:37:54.757209] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.757669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.757704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.757714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.757939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.758142] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.758151] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.758158] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.761391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.727 [2024-06-10 11:37:54.770743] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.771383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.771422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.771432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.771650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.771861] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.771870] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.771877] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.727 [2024-06-10 11:37:54.775111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.727 [2024-06-10 11:37:54.784267] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.727 [2024-06-10 11:37:54.784793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.727 [2024-06-10 11:37:54.784833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.727 [2024-06-10 11:37:54.784845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.727 [2024-06-10 11:37:54.785066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.727 [2024-06-10 11:37:54.785269] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.727 [2024-06-10 11:37:54.785277] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.727 [2024-06-10 11:37:54.785283] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.788518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
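Every failed iteration above follows the same pattern: posix_sock_create() reports errno = 111, which on Linux is ECONNREFUSED, so nvme_tcp_qpair_connect_sock() cannot reach 10.0.0.2 port 4420, the subsequent flush hits a dead socket ("(9): Bad file descriptor", i.e. EBADF), and spdk_nvme_ctrlr_reconnect_poll_async() leaves the controller in the failed state until the next retry. That is expected at this point in the run, because the target's TCP listener on port 4420 has not been added yet; it appears further down, after which a reset finally succeeds. The errno mapping can be confirmed with a throwaway one-liner (illustrative only, not part of the test scripts):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused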
00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.728 [2024-06-10 11:37:54.797884] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.798519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.728 [2024-06-10 11:37:54.798554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.728 [2024-06-10 11:37:54.798565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.728 [2024-06-10 11:37:54.798783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.728 [2024-06-10 11:37:54.798993] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.728 [2024-06-10 11:37:54.799003] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.728 [2024-06-10 11:37:54.799009] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.802242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.728 [2024-06-10 11:37:54.811394] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.811810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.728 [2024-06-10 11:37:54.811839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.728 [2024-06-10 11:37:54.811847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.728 [2024-06-10 11:37:54.812048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.728 [2024-06-10 11:37:54.812248] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.728 [2024-06-10 11:37:54.812256] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.728 [2024-06-10 11:37:54.812262] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.815490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.728 [2024-06-10 11:37:54.825029] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.825553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.728 [2024-06-10 11:37:54.825568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.728 [2024-06-10 11:37:54.825575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.728 [2024-06-10 11:37:54.825774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.728 [2024-06-10 11:37:54.825979] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.728 [2024-06-10 11:37:54.825987] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.728 [2024-06-10 11:37:54.825993] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.829224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.728 [2024-06-10 11:37:54.838579] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.839086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.728 [2024-06-10 11:37:54.839121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.728 [2024-06-10 11:37:54.839130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.728 [2024-06-10 11:37:54.839348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.728 [2024-06-10 11:37:54.839551] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.728 [2024-06-10 11:37:54.839559] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.728 [2024-06-10 11:37:54.839566] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.842806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.728 [2024-06-10 11:37:54.842932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.728 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.728 [2024-06-10 11:37:54.852158] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.852686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.728 [2024-06-10 11:37:54.852703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.728 [2024-06-10 11:37:54.852710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.728 [2024-06-10 11:37:54.852916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.728 [2024-06-10 11:37:54.853116] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.728 [2024-06-10 11:37:54.853123] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.728 [2024-06-10 11:37:54.853129] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.728 [2024-06-10 11:37:54.856358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.728 [2024-06-10 11:37:54.865699] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.728 [2024-06-10 11:37:54.866222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.729 [2024-06-10 11:37:54.866237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.729 [2024-06-10 11:37:54.866244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.729 [2024-06-10 11:37:54.866443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.729 [2024-06-10 11:37:54.866643] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.729 [2024-06-10 11:37:54.866650] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.729 [2024-06-10 11:37:54.866656] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.729 [2024-06-10 11:37:54.869887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.729 [2024-06-10 11:37:54.879427] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.729 [2024-06-10 11:37:54.880088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.729 [2024-06-10 11:37:54.880124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.729 [2024-06-10 11:37:54.880134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.729 [2024-06-10 11:37:54.880354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.729 [2024-06-10 11:37:54.880557] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.729 [2024-06-10 11:37:54.880566] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.729 [2024-06-10 11:37:54.880573] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.729 Malloc0 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.729 [2024-06-10 11:37:54.883812] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.729 [2024-06-10 11:37:54.892977] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.729 [2024-06-10 11:37:54.893497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.729 [2024-06-10 11:37:54.893531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.729 [2024-06-10 11:37:54.893541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:57.729 [2024-06-10 11:37:54.893760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.729 [2024-06-10 11:37:54.893971] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.729 [2024-06-10 11:37:54.893981] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.729 [2024-06-10 11:37:54.893988] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.729 [2024-06-10 11:37:54.897222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.729 [2024-06-10 11:37:54.906564] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.729 [2024-06-10 11:37:54.907158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.729 [2024-06-10 11:37:54.907193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3300 with addr=10.0.0.2, port=4420 00:30:57.729 [2024-06-10 11:37:54.907203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3300 is same with the state(5) to be set 00:30:57.729 [2024-06-10 11:37:54.907421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3300 (9): Bad file descriptor 00:30:57.729 [2024-06-10 11:37:54.907625] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.729 [2024-06-10 11:37:54.907633] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.729 [2024-06-10 11:37:54.907640] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.729 [2024-06-10 11:37:54.910882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.729 [2024-06-10 11:37:54.912622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.729 11:37:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1727361 00:30:57.729 [2024-06-10 11:37:54.920037] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.990 [2024-06-10 11:37:54.997873] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
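Filtering the reconnect noise out of the interleaved xtrace above, the target-side bring-up for this run reduces to the following RPC sequence (reconstructed from the rpc_cmd calls recorded in the log; rpc_cmd is the autotest helper around scripts/rpc.py, so a direct equivalent would be):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added the log reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420", the pending controller reset succeeds ("Resetting controller successful"), and the bdevperf process being waited on runs its verify workload against the target.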
00:31:07.986 00:31:07.986 Latency(us) 00:31:07.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:07.986 Verification LBA range: start 0x0 length 0x4000 00:31:07.986 Nvme1n1 : 15.01 7181.30 28.05 10995.37 0.00 7019.24 715.22 17543.48 00:31:07.986 =================================================================================================================== 00:31:07.986 Total : 7181.30 28.05 10995.37 0.00 7019.24 715.22 17543.48 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:07.986 rmmod nvme_tcp 00:31:07.986 rmmod nvme_fabrics 00:31:07.986 rmmod nvme_keyring 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1728285 ']' 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1728285 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1728285 ']' 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1728285 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1728285 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:07.986 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1728285' 00:31:07.986 killing process with pid 1728285 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1728285 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1728285 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
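A quick consistency check on the Latency(us) table above: with the 4096-byte I/O size shown in the job line, throughput in MiB/s should equal IOPS * 4096 / 1048576, and 7181.30 * 4096 / 1048576 works out to the reported 28.05 MiB/s; the large Fail/s figure is consistent with the many I/Os aborted while the controller was repeatedly reset during the 15.01 s run. For example:

    python3 -c 'print(round(7181.30 * 4096 / 1048576, 2))'
    # prints: 28.05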
00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.987 11:38:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.931 11:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:08.931 00:31:08.931 real 0m28.680s 00:31:08.931 user 1m3.795s 00:31:08.931 sys 0m7.695s 00:31:08.931 11:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:08.931 11:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.931 ************************************ 00:31:08.931 END TEST nvmf_bdevperf 00:31:08.931 ************************************ 00:31:08.931 11:38:05 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:08.931 11:38:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:08.931 11:38:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:08.931 11:38:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.931 ************************************ 00:31:08.931 START TEST nvmf_target_disconnect 00:31:08.931 ************************************ 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:08.931 * Looking for test storage... 
00:31:08.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.931 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.932 11:38:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.193 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.193 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.193 11:38:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.193 11:38:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:17.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:17.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.337 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.338 11:38:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:17.338 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:17.338 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.338 11:38:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:17.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:31:17.338 00:31:17.338 --- 10.0.0.2 ping statistics --- 00:31:17.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.338 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:31:17.338 00:31:17.338 --- 10.0.0.1 ping statistics --- 00:31:17.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.338 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.338 ************************************ 00:31:17.338 START TEST nvmf_target_disconnect_tc1 00:31:17.338 ************************************ 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:31:17.338 
11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.338 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.338 [2024-06-10 11:38:14.383338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.338 [2024-06-10 11:38:14.383404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5292b0 with addr=10.0.0.2, port=4420 00:31:17.338 [2024-06-10 11:38:14.383430] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:17.338 [2024-06-10 11:38:14.383440] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:17.338 [2024-06-10 11:38:14.383447] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:17.338 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:17.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:17.338 Initializing NVMe Controllers 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:17.338 00:31:17.338 real 0m0.129s 00:31:17.338 user 0m0.050s 00:31:17.338 sys 0m0.078s 
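The tc1 output above is an expected-failure case: the reconnect example is launched against 10.0.0.2:4420 before any NVMe-oF target has been started, so connect() returns ECONNREFUSED (errno = 111), spdk_nvme_probe() fails, and the NOT wrapper turns the nonzero exit status (es=1) into a pass. A minimal sketch of that pattern follows; the reconnect binary path is the one used in this run, and the assumption that nothing is yet listening on the port is taken from this log and may not hold elsewhere.

    # Sketch only: require the reconnect example to fail when no target listens.
    RECONNECT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
    if "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded with no target listening" >&2
        exit 1
    else
        echo "probe failed as expected (connect() errno = 111)"
    fi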
00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:17.338 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:17.338 ************************************ 00:31:17.338 END TEST nvmf_target_disconnect_tc1 00:31:17.338 ************************************ 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.339 ************************************ 00:31:17.339 START TEST nvmf_target_disconnect_tc2 00:31:17.339 ************************************ 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1734337 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1734337 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1734337 ']' 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:17.339 11:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:17.339 [2024-06-10 11:38:14.528465] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:31:17.339 [2024-06-10 11:38:14.528522] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.600 [2024-06-10 11:38:14.621704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.600 [2024-06-10 11:38:14.713745] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.600 [2024-06-10 11:38:14.713804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.600 [2024-06-10 11:38:14.713812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.600 [2024-06-10 11:38:14.713819] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.600 [2024-06-10 11:38:14.713836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.600 [2024-06-10 11:38:14.713992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:31:17.600 [2024-06-10 11:38:14.714261] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:31:17.600 [2024-06-10 11:38:14.714416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:31:17.600 [2024-06-10 11:38:14.714418] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:31:18.172 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:18.172 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:18.172 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.172 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:18.172 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.432 Malloc0 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.432 [2024-06-10 11:38:15.458097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.432 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.433 [2024-06-10 11:38:15.486450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1734430 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:18.433 11:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.433 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.358 11:38:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1734337 00:31:20.358 11:38:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 
00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Write completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 Read completed with error (sct=0, sc=8) 00:31:20.358 starting I/O failed 00:31:20.358 [2024-06-10 11:38:17.513983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.358 [2024-06-10 11:38:17.514390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.514408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.514775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.514783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 
00:31:20.358 [2024-06-10 11:38:17.515079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.515111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.515342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.515351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.515660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.515668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.516109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.516139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.516434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.516444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.516768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.516776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.517144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.517156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.517462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.517471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.517789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.517797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.518156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.518165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 
00:31:20.358 [2024-06-10 11:38:17.518510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.518520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.518627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.518633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.518833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.518842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.519232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.519241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.519423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.519431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.519722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.519731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.519955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.519964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.358 [2024-06-10 11:38:17.520219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.358 [2024-06-10 11:38:17.520227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.358 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.520412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.520421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.520729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.520737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 
00:31:20.359 [2024-06-10 11:38:17.521069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.521078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.521419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.521429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.521763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.521771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.521955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.521963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.522302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.522310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.522638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.522646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.522830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.522838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.523185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.523193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.523524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.523533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.523875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.523883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 
00:31:20.359 [2024-06-10 11:38:17.524215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.524223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.524547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.524556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.524784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.524792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.525094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.525102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.525433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.525443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.525779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.525788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.526093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.526102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.526408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.526416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.526734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.526742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.527046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.527053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 
00:31:20.359 [2024-06-10 11:38:17.527387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.527396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.527727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.527736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.528085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.528095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.528380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.528389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.528726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.528734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.529030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.529038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.529366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.529377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.529561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.529571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.529901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.529909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.359 [2024-06-10 11:38:17.530882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.530902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 
00:31:20.359 [2024-06-10 11:38:17.531220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.359 [2024-06-10 11:38:17.531230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.359 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.531560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.531570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.531868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.531877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.532207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.532216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.532522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.532531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.532755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.532763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.533084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.533093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.533392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.533402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.533590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.533600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.534369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.534386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 
00:31:20.360 [2024-06-10 11:38:17.534703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.534712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.535342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.535358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.535647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.535657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.536007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.536016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.536345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.536354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.536680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.536688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.536995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.537004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.537304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.537313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.537592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.537600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.537896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.537905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 
00:31:20.360 [2024-06-10 11:38:17.538215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.538223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.538551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.538560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.538858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.538867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.539174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.539184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.539522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.539531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.539736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.539744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.540052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.540061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.540761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.540776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.541079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.541089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.541718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.541734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 
00:31:20.360 [2024-06-10 11:38:17.541929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.541939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.542256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.542265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.542576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.542585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.360 [2024-06-10 11:38:17.542881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.360 [2024-06-10 11:38:17.542889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.360 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.543196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.543204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.543576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.543584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.543867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.543877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.544189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.544197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.544510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.544518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.544829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.544837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 
00:31:20.361 [2024-06-10 11:38:17.545147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.545156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.545462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.545470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.545771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.545779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.545991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.546000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.546336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.546344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.546674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.546683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.547428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.547443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.547746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.547755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.548071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.548080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.548414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.548423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 
00:31:20.361 [2024-06-10 11:38:17.548797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.548806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.549094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.549104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.549373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.549383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.549687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.549696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.549990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.549999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.550306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.550315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.550607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.550615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.550949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.550958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.551239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.551247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.551618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.551626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 
00:31:20.361 [2024-06-10 11:38:17.551964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.551973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.552265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.552276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.552590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.552606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.552868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.552885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.553231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.553239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.553567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.553575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.553926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.361 [2024-06-10 11:38:17.553935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.361 qpair failed and we were unable to recover it. 00:31:20.361 [2024-06-10 11:38:17.554291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.554299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.554643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.554652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.554980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.554988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 
00:31:20.362 [2024-06-10 11:38:17.555307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.555315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.555647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.555655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.555971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.555979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.556231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.556240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.556549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.556558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.556884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.556892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.557216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.557225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.557521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.557530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.557860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.557869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.558184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.558192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 
00:31:20.362 [2024-06-10 11:38:17.558492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.558501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.558845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.558854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.559074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.559083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.559155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.559163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.559394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.559401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.559712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.559721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.559881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.559890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.560196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.560204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.560620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.560627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.560936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.560945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 
00:31:20.362 [2024-06-10 11:38:17.561288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.561296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.561621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.561629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.561939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.561947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.562256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.562264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.562556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.562565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.562869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.562877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.563197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.563205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.563528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.563536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.563889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.563898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 00:31:20.362 [2024-06-10 11:38:17.564243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.362 [2024-06-10 11:38:17.564252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.362 qpair failed and we were unable to recover it. 
00:31:20.363 [2024-06-10 11:38:17.564450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.564458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.564777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.564785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.565116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.565126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.565319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.565330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.565625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.565634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.565850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.565859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.566283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.566292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.566619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.566627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.363 [2024-06-10 11:38:17.566945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.363 [2024-06-10 11:38:17.566954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.363 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.567270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.567278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 
00:31:20.364 [2024-06-10 11:38:17.567574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.567583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.567900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.567908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.568139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.568148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.568455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.568464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.568779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.568788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.569134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.569143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.569286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.569294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.569636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.569645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.569967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.569974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.570314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.570323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 
00:31:20.364 [2024-06-10 11:38:17.570611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.570619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.570831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.570839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.571023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.571030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.571388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.571396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.571571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.571579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.571701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.571708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.571952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.571961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.572287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.572297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.572536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.572545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.364 [2024-06-10 11:38:17.572886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.572895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 
00:31:20.364 [2024-06-10 11:38:17.573198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.364 [2024-06-10 11:38:17.573207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.364 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.573406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.573416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.573732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.573741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.574029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.574037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.574377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.574385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.574705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.574713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.575012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.575021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.575352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.575360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.575582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.575589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.575882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.575890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 
00:31:20.670 [2024-06-10 11:38:17.576206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.576215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.670 [2024-06-10 11:38:17.576582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.670 [2024-06-10 11:38:17.576590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.670 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.576905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.576913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.577220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.577231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.577566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.577574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.577756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.577764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.578088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.578096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.578388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.578395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.578728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.578736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.579014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.579022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 
00:31:20.671 [2024-06-10 11:38:17.579340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.579348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.579731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.579739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.579985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.579993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.580193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.580201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.580507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.580514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.580723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.580731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.581071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.581081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.581430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.581438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.581666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.581673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.581888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.581896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 
00:31:20.671 [2024-06-10 11:38:17.582251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.582260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.582570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.582579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.582934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.582942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.583161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.583169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.583486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.583494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.583719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.583727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.584016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.584024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.584333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.584342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.584534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.584542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.584889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.584899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 
00:31:20.671 [2024-06-10 11:38:17.585232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.585240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.585496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.585504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.585804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.585812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.586001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.586009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.671 qpair failed and we were unable to recover it. 00:31:20.671 [2024-06-10 11:38:17.586322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.671 [2024-06-10 11:38:17.586329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 00:31:20.672 [2024-06-10 11:38:17.586526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.672 [2024-06-10 11:38:17.586534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 00:31:20.672 [2024-06-10 11:38:17.586880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.672 [2024-06-10 11:38:17.586888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 00:31:20.672 [2024-06-10 11:38:17.587208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.672 [2024-06-10 11:38:17.587216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 00:31:20.672 [2024-06-10 11:38:17.587556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.672 [2024-06-10 11:38:17.587564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 00:31:20.672 [2024-06-10 11:38:17.587836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.672 [2024-06-10 11:38:17.587844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.672 qpair failed and we were unable to recover it. 
00:31:20.672 [2024-06-10 11:38:17.587940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.672 [2024-06-10 11:38:17.587947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420
00:31:20.672 qpair failed and we were unable to recover it.
00:31:20.672 [the three messages above repeat for every reconnect attempt between 2024-06-10 11:38:17.588 and 11:38:17.650: each connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair on tqpair=0x7f05f8000b90 cannot be recovered]
00:31:20.678 [2024-06-10 11:38:17.650009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.678 [2024-06-10 11:38:17.650017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420
00:31:20.678 qpair failed and we were unable to recover it.
00:31:20.678 [2024-06-10 11:38:17.650330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.650338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.650632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.650640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.650982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.650991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.651302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.651310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.651622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.651630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.651955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.651963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.652240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.652248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-10 11:38:17.652424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-10 11:38:17.652433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.652674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.652681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.652956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.652964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 
00:31:20.679 [2024-06-10 11:38:17.653290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.653299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.653630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.653638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.653919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.653927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.654248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.654256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.654551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.654559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.654865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.654873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.655170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.655178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.655477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.655485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.655858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.655866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.656154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.656163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 
00:31:20.679 [2024-06-10 11:38:17.656368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.656377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.656700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.656708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.656936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.656944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.657193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.657200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.657472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.657480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.657651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.657659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.657887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.657896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.658191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.658199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.658395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.658402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.658698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.658706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 
00:31:20.679 [2024-06-10 11:38:17.658879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.658887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.659184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.659192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.659422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.659430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.659756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.659765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.659838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.659845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.660152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.660161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.660458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.660466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.660775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.660784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.661083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.661092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-10 11:38:17.661388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.661396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 
00:31:20.679 [2024-06-10 11:38:17.661731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-10 11:38:17.661739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.661928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.661935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.662258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.662267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.662456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.662464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.662792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.662800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.663133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.663141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.663474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.663483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.663834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.663842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.664164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.664173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.664395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.664403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 
00:31:20.680 [2024-06-10 11:38:17.664726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.664735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.664825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.664833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.665125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.665134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.665356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.665364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.665488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.665496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.665601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.665608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.665917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.665925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.666247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.666256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.666433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.666442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.666779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.666788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 
00:31:20.680 [2024-06-10 11:38:17.667112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.667121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.667410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.667420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.667663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.667671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.667989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.667998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.668348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.668356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.668688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.668696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.668905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.668913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.669209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.669217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.669544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.669552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.669834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.669842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 
00:31:20.680 [2024-06-10 11:38:17.670233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.670241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.670361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.670368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.670683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.670691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.680 [2024-06-10 11:38:17.670869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.680 [2024-06-10 11:38:17.670878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.680 qpair failed and we were unable to recover it. 00:31:20.681 [2024-06-10 11:38:17.671187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.681 [2024-06-10 11:38:17.671195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.681 qpair failed and we were unable to recover it. 00:31:20.681 [2024-06-10 11:38:17.671507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.681 [2024-06-10 11:38:17.671515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.681 qpair failed and we were unable to recover it. 00:31:20.681 [2024-06-10 11:38:17.671841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.681 [2024-06-10 11:38:17.671850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.681 qpair failed and we were unable to recover it. 00:31:20.681 [2024-06-10 11:38:17.672172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.681 [2024-06-10 11:38:17.672181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.681 qpair failed and we were unable to recover it. 00:31:20.681 [2024-06-10 11:38:17.672403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.681 [2024-06-10 11:38:17.672411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.672761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.672770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 
00:31:20.682 [2024-06-10 11:38:17.673138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.673146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.673481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.673489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.673779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.673788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.674073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.674081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.674419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.674427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.674730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.674739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.675060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.675068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.675300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.675308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.675644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.675652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.675959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.675968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 
00:31:20.682 [2024-06-10 11:38:17.676267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.676275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.676597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.676605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.676797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.676805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.677145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.677153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.677461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.677470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.677720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.677728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.678006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.678014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.678331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.678339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.678677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.678686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.679007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.679015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 
00:31:20.682 [2024-06-10 11:38:17.679248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.679256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.679595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.679603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.679897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.679906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.680226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.680234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.680530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.680539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.680844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.682 [2024-06-10 11:38:17.680852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.682 qpair failed and we were unable to recover it. 00:31:20.682 [2024-06-10 11:38:17.681136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.681145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.681331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.681339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.681692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.681700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.681974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.681983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 
00:31:20.683 [2024-06-10 11:38:17.682296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.682304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.682628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.682636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.682849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.682857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.683149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.683158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.683374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.683382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.683752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.683760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.683884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.683892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.684183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.684191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.684440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.684448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.684766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.684773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 
00:31:20.683 [2024-06-10 11:38:17.685096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.685106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.685372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.685380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.685708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.685716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.686125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.686133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.686318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.686326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.686652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.686660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.686975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.686983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.687260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.687268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.687597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.687605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.687895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.687904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 
00:31:20.683 [2024-06-10 11:38:17.688256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.688263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.688569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.688578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.688786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.688794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.689072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.689080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.689304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.689311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.689628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.689637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.690028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.690036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.690366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.690374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.690558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.683 [2024-06-10 11:38:17.690566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.683 qpair failed and we were unable to recover it. 00:31:20.683 [2024-06-10 11:38:17.690871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.690879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 
00:31:20.684 [2024-06-10 11:38:17.691186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.691194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.691532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.691541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.691885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.691893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.692214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.692222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.692530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.692539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.692751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.692760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.693072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.693081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.693310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.693318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.693512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.693520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.693831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.693839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 
00:31:20.684 [2024-06-10 11:38:17.694032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.694040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.694324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.694332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.694644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.694653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.694957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.694966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.695281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.695289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.695512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.695519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.695801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.695808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.696120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.696129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.696357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.696365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.696685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.696693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 
00:31:20.684 [2024-06-10 11:38:17.697010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.697019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.697331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.697339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.697665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.697673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.697990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.697999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.698313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.698321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.698648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.698656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.698920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.698928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.699240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.699248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.699578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.699586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.699943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.699952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 
00:31:20.684 [2024-06-10 11:38:17.700145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.700153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.700451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.684 [2024-06-10 11:38:17.700459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.684 qpair failed and we were unable to recover it. 00:31:20.684 [2024-06-10 11:38:17.700833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.700842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.701050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.701058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.701377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.701385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.701751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.701759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.701994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.702002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.702309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.702317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.702486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.702494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.702787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.702795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 
00:31:20.685 [2024-06-10 11:38:17.702946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.702956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.703325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.703333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.703648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.703656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.703978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.703988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.704330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.704338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.704650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.704659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.704926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.704934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.705218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.705227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.705558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.705566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.705794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.705801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 
00:31:20.685 [2024-06-10 11:38:17.706116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.706125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.706451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.706459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.706771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.706779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.707088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.707096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.707416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.707424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.707746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.707755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.708077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.708085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.708416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.708424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.708603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.708611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.708850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.708860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 
00:31:20.685 [2024-06-10 11:38:17.709209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.709216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.709544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.709552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.709859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.709868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.710237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.710245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.710468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.685 [2024-06-10 11:38:17.710476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.685 qpair failed and we were unable to recover it. 00:31:20.685 [2024-06-10 11:38:17.710787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.710795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.711175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.711184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.711503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.711511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.711851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.711860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.712188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.712196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 
00:31:20.686 [2024-06-10 11:38:17.712368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.712376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.712552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.712560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.712868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.712876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.713197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.713206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.713525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.713534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.713854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.713863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.714102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.714110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.714434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.714442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.714636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.714644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.714951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.714959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 
00:31:20.686 [2024-06-10 11:38:17.715271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.715281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.715612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.715620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.715948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.715957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.716266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.716275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.716599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.716607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.716957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.716966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.717211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.717219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.717538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.717546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.717816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.717829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.718154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.718162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 
00:31:20.686 [2024-06-10 11:38:17.718345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.718353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.718640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.718648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.718953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.718963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.719272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.719281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.719615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.719623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.719809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.719817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.720135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.720144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.720454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.720462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.686 qpair failed and we were unable to recover it. 00:31:20.686 [2024-06-10 11:38:17.720631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.686 [2024-06-10 11:38:17.720639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.720852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.720861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 
00:31:20.687 [2024-06-10 11:38:17.721003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.721011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.721327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.721335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.721563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.721571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.721864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.721872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.722208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.722217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.722514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.722522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.722850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.722858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.723131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.723139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.723462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.723470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.723799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.723807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 
00:31:20.687 [2024-06-10 11:38:17.724132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.724142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.724355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.724363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.724697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.724705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.725051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.725061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.725365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.725373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.725698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.725706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.726008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.726016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.726333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.726341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.726672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.726681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.726998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.727007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 
00:31:20.687 [2024-06-10 11:38:17.727188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.727198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.727501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.727509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.727842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.727852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.728178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.728186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.728495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.728503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.728805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.728815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.687 qpair failed and we were unable to recover it. 00:31:20.687 [2024-06-10 11:38:17.729125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.687 [2024-06-10 11:38:17.729133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.729446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.729454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.729827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.729836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.730049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.730057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 
00:31:20.688 [2024-06-10 11:38:17.730375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.730383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.730592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.730599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.730783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.730792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.731122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.731130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.731343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.731351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.731674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.731682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.731926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.731935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.732221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.732229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.732405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.732413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.732749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.732757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 
00:31:20.688 [2024-06-10 11:38:17.733077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.733086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.733296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.733304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.733631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.733639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.733963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.733972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.734306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.734314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.734624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.734633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.734847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.734857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.735179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.735188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.735468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.735477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.735753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.735761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 
00:31:20.688 [2024-06-10 11:38:17.736121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.736129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.736414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.736423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.736754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.736763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.737021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.737029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.737344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.737352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.737680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.737688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.737904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.737911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.738203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.738211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.738390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.738398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.688 qpair failed and we were unable to recover it. 00:31:20.688 [2024-06-10 11:38:17.738726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.688 [2024-06-10 11:38:17.738734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 
00:31:20.689 [2024-06-10 11:38:17.739050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.739060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.739279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.739287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.739602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.739610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.739930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.739940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.740234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.740242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.740448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.740455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.740780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.740788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.741007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.741015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.741346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.741355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.741687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.741695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 
00:31:20.689 [2024-06-10 11:38:17.742012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.742021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.742331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.742339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.742639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.742648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.742965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.742974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.743195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.743203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.743461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.743469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.743775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.743784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.744094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.744103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.744410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.744418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.744668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.744676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 
00:31:20.689 [2024-06-10 11:38:17.744906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.744915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.745102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.745109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.745301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.745311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.745629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.745637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.745936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.745945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.746252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.746260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.746594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.746603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.746852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.746860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.747062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.747070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.747386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.747395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 
00:31:20.689 [2024-06-10 11:38:17.747609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.747618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.747899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.747916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.689 qpair failed and we were unable to recover it. 00:31:20.689 [2024-06-10 11:38:17.748215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.689 [2024-06-10 11:38:17.748223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.748408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.748416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.748740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.748748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.749073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.749081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.749374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.749382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.749574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.749582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.749896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.749904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.750244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.750252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 
00:31:20.690 [2024-06-10 11:38:17.750554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.750564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.750895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.750903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.751250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.751258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.751673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.751681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.751917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.751925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.752291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.752298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.752612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.752620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.752899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.752907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.753243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.753252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.753567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.753575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 
00:31:20.690 [2024-06-10 11:38:17.753765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.753773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.754075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.754083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.754431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.754440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.754743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.754753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.755047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.755055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.755363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.755371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.755686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.755696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.755924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.755932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.756256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.756264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.756583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.756593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 
00:31:20.690 [2024-06-10 11:38:17.756901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.756909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.757245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.757254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.757563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.757572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.757790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.757799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.757973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.757981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.690 [2024-06-10 11:38:17.758324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.690 [2024-06-10 11:38:17.758332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.690 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.758646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.758655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.758936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.758944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.759280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.759288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.759654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.759662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 
00:31:20.691 [2024-06-10 11:38:17.759974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.759984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.760290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.760298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.760593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.760602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.760888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.760896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.761211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.761220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.761526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.761534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.761876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.761884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.762140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.762148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.762478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.762486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.762801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.762809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 
00:31:20.691 [2024-06-10 11:38:17.762894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.762902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.763239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.763248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.763551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.763559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.763734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.763742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.764057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.764065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.764392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.764400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.764687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.764696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.765011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.765019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.765375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.765382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.765723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.765731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 
00:31:20.691 [2024-06-10 11:38:17.765869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.765877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.766118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.766126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.766341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.766349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.766665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.766674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.766907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.766916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.767256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.767264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.767565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.767573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.767878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.767887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.691 [2024-06-10 11:38:17.768185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.691 [2024-06-10 11:38:17.768194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.691 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.768414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.768422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 
00:31:20.692 [2024-06-10 11:38:17.768739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.768748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.769154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.769163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.769385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.769393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.769711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.769719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.770056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.770064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.770375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.770383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.770696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.770935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.770943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.771247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.771255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.771471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.771480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 
00:31:20.692 [2024-06-10 11:38:17.771723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.771732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.772052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.772062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.772297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.772307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.772623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.772632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.772872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.772880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.773210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.773218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.773526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.773534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.773784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.773792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.774101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.774110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.774426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.774435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 
00:31:20.692 [2024-06-10 11:38:17.774656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.774665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.774962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.774970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.775304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.775312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.775498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.775506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.775638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.775646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.775857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.775866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.692 [2024-06-10 11:38:17.776261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.692 [2024-06-10 11:38:17.776269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.692 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.776489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.776497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.776834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.776842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.777175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.777183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 
00:31:20.693 [2024-06-10 11:38:17.777446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.777455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.777764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.777772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.778100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.778108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.778434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.778443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.778744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.778752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.779128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.779136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.779476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.779485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.779793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.779801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.780021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.780029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.780352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.780360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 
00:31:20.693 [2024-06-10 11:38:17.780550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.780558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.780603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.780611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.780906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.780914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.781244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.781253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.781456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.781464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.781681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.781688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.782009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.782017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.782250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.782259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.782482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.782490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.782790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.782799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 
00:31:20.693 [2024-06-10 11:38:17.783119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.783127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.783444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.783452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.783712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.783720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.783922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.783930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.784247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.784255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.784453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.784461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.784806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.784813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.785145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.785154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.785451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.785460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 00:31:20.693 [2024-06-10 11:38:17.785650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.693 [2024-06-10 11:38:17.785658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.693 qpair failed and we were unable to recover it. 
00:31:20.694 [2024-06-10 11:38:17.785954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.785964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.786281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.786290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.786607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.786616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.786700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.786707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.786949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.786957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.787273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.787280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.787456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.787464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.787686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.787694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.788029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.788038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.788418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.788427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 
00:31:20.694 [2024-06-10 11:38:17.788606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.788615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.788812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.788823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.789078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.789087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.789360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.789368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.789694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.789702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.790018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.790027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.790223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.790230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.790539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.790547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.790756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.790765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.791121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.791130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 
00:31:20.694 [2024-06-10 11:38:17.791445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.791454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.791770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.791778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.792095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.792103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.792453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.792461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.792780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.792788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.793036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.793044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.793367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.793375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.793710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.793720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.794055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.794063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.794236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.794243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 
00:31:20.694 [2024-06-10 11:38:17.794569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.794577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.794871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.794881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.795232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.694 [2024-06-10 11:38:17.795240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.694 qpair failed and we were unable to recover it. 00:31:20.694 [2024-06-10 11:38:17.795473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.795481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.795782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.795791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.796115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.796124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.796470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.796479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.796652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.796660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.796825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.796835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.797143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.797151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 
00:31:20.695 [2024-06-10 11:38:17.797502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.797511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.797803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.797813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.798120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.798128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.798360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.798368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.798647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.798655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.798868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.798876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.799108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.799116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.799434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.799443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.799761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.799769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.799976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.799984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 
00:31:20.695 [2024-06-10 11:38:17.800282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.800290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.800600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.800609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.800830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.800838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.801149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.801158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.801529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.801537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.801809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.801817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.801918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.801927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.802231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.802239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.802539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.802549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.802942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.802950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 
00:31:20.695 [2024-06-10 11:38:17.803159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.803167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.803495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.803504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.803841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.803851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.804077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.804085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.804381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.804390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.804698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.695 [2024-06-10 11:38:17.804706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.695 qpair failed and we were unable to recover it. 00:31:20.695 [2024-06-10 11:38:17.804938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.804947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.805227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.805235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.805425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.805433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.805744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.805753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 
00:31:20.696 [2024-06-10 11:38:17.806154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.806162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.806438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.806448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.806633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.806642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.806869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.806877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.807095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.807103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.807378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.807386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.807614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.807622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.807927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.807937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.808091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.808100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.808278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.808287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 
00:31:20.696 [2024-06-10 11:38:17.808616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.808628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.808826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.808833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.809135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.809142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.809433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.809440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.809772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.809780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.809907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.809915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.810228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.810235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.810555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.810564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.810789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.810796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.811146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.811154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 
00:31:20.696 [2024-06-10 11:38:17.811440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.811448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.811754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.811761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.811886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.811897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.812222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.812230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.812537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.812545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.812850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.812858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.813197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.813205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.813469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.813477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.813817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.696 [2024-06-10 11:38:17.813827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.696 qpair failed and we were unable to recover it. 00:31:20.696 [2024-06-10 11:38:17.814121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.814129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 
00:31:20.697 [2024-06-10 11:38:17.814297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.814304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.814582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.814590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.814964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.814971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.815296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.815303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.815613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.815621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.815722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.815729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.815956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.815964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.816195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.816203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.816524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.816531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.816827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.816835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 
00:31:20.697 [2024-06-10 11:38:17.817153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.817160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.817466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.817473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.817674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.817682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.817971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.817979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.818329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.818337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.818522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.818530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.818727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.818734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.819027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.819035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.819383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.819390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.819696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.819703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 
00:31:20.697 [2024-06-10 11:38:17.820051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.820061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.820306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.820313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.820613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.820621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.820967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.820975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.821248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.821256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.821573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.821580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.821764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.821772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.697 [2024-06-10 11:38:17.822056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.697 [2024-06-10 11:38:17.822064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.697 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.822364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.822372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.822488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.822495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 
00:31:20.698 [2024-06-10 11:38:17.822698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.822705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.823018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.823025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.823369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.823377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.823725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.823732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.823910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.823918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.824126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.824134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.824413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.824420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.824772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.824780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.825021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.825029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.825220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.825228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 
00:31:20.698 [2024-06-10 11:38:17.825548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.825555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.825899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.825907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.826039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.826046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.826211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.826218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.826548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.826556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.826890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.826898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.827122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.827130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.827445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.827453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.827786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.827793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.828120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.828128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 
00:31:20.698 [2024-06-10 11:38:17.828453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.828461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.828767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.828775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.829077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.829085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.829416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.829424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.829735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.829743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.830016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.830024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.830223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.830230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.830444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.830452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.830723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.830731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.698 [2024-06-10 11:38:17.830944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.830952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 
00:31:20.698 [2024-06-10 11:38:17.831129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.698 [2024-06-10 11:38:17.831138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.698 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.831436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.831444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.831506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.831513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.831787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.831794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.832190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.832198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.832531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.832539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.832761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.832768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.832979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.832987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.833306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.833314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.833649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.833656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 
00:31:20.699 [2024-06-10 11:38:17.833989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.833996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.834339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.834346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.834662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.834669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.834984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.834992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.835209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.835217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.835515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.835522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.835867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.835875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.836190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.836198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.836559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.836567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.836859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.836866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 
00:31:20.699 [2024-06-10 11:38:17.837083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.837090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.837398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.837406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.837591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.837599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.837924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.837931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.838226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.838233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.838591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.838598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.838932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.838940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.839249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.839256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.839552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.839560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.839888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.839895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 
00:31:20.699 [2024-06-10 11:38:17.840176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.840184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.840552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.840559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.840790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.840797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.699 [2024-06-10 11:38:17.841124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.699 [2024-06-10 11:38:17.841131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.699 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.841436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.841444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.841694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.841701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.842086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.842094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.842402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.842409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.842736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.842743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.842921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.842929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 
00:31:20.700 [2024-06-10 11:38:17.843169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.843176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.843494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.843501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.843795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.843803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.844078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.844086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.844395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.844402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.844716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.844723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.844909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.844916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.845234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.845240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.845553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.845561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.845893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.845900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 
00:31:20.700 [2024-06-10 11:38:17.846118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.846125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.846427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.846434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.846603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.846610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.846883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.846890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.847123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.847130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.847515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.847522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.847829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.847837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.848154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.848160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.848465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.848472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.848758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.848765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 
00:31:20.700 [2024-06-10 11:38:17.849096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.849103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.849496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.849504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.849686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.849693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.849904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.849911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.850228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.850235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.850572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.850579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.850887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.700 [2024-06-10 11:38:17.850894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.700 qpair failed and we were unable to recover it. 00:31:20.700 [2024-06-10 11:38:17.851200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.851209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.851465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.851472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.851787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.851794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 
00:31:20.701 [2024-06-10 11:38:17.852081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.852089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.852404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.852411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.852726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.852734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.853080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.853087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.853423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.853430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.853672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.853679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.854004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.854011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.854337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.854344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.854571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.854578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.854929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.854936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 
00:31:20.701 [2024-06-10 11:38:17.855305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.855313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.855604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.855611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.855953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.855960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.856264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.856272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.856571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.856578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.856874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.856882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.857099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.857105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.857414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.857421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.857736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.857743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.858065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.858073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 
00:31:20.701 [2024-06-10 11:38:17.858410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.858417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.858631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.858638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.858952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.858959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.859267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.859275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.859592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.859599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.859939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.859946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.860259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.860266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.860581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.860588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.860987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.860993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.701 qpair failed and we were unable to recover it. 00:31:20.701 [2024-06-10 11:38:17.861335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.701 [2024-06-10 11:38:17.861341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 
00:31:20.702 [2024-06-10 11:38:17.861660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.861668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.861813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.861820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.862159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.862166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.862497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.862504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.862688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.862696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.863011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.863018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.863344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.863351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.863722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.863731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.864040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.864047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.864436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.864443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 
00:31:20.702 [2024-06-10 11:38:17.864808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.864815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.865115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.865122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.865439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.865446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.865739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.865746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.866091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.866098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.866394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.866401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.866609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.866615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.866939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.866946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.867252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.867259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.867401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.867408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 
00:31:20.702 [2024-06-10 11:38:17.867782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.867788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.867968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.867976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.868289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.868296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.868598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.868606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.868783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.868790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.869018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.869025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.869354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.869361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.869656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.869663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.870002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.870009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.870194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.870201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 
00:31:20.702 [2024-06-10 11:38:17.870470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.870477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.870676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.870682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.702 qpair failed and we were unable to recover it. 00:31:20.702 [2024-06-10 11:38:17.871016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.702 [2024-06-10 11:38:17.871023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.871347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.871354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.871665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.871672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.871975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.871983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.872207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.872213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.872516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.872523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.872836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.872843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.873204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.873210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 
00:31:20.703 [2024-06-10 11:38:17.873525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.873532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.873768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.873775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.874086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.874094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.874295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.874301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.874545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.874552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.874878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.874885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.875180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.875187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.875400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.875408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.875730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.875736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.703 [2024-06-10 11:38:17.875918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.875926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 
00:31:20.703 [2024-06-10 11:38:17.876245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.703 [2024-06-10 11:38:17.876252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.703 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.876575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.876584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.876913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.876921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.877158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.877165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.877475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.877482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.877665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.877672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.877959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.877966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.878287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.878294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.878610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.878616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.878914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.878922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 
00:31:20.982 [2024-06-10 11:38:17.879268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.879275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.879583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.879590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.879908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.879916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-06-10 11:38:17.880235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.982 [2024-06-10 11:38:17.880243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.880562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.880569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.880872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.880879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.881188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.881194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.881500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.881507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.881860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.881867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.882172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.882179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 
00:31:20.983 [2024-06-10 11:38:17.882482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.882488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.882828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.882836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.883163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.883170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.883504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.883511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.883724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.883730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.884048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.884055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.884345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.884352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.884670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.884678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.885005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.885012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.885382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.885388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 
00:31:20.983 [2024-06-10 11:38:17.885682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.885689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.886001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.886008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.886375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.886382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.886683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.886690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.887110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.887117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.887410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.887418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.887733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.887739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.887920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.887929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.888281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.888288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.888628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.888634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 
00:31:20.983 [2024-06-10 11:38:17.888956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.888963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.889227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.889233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.889579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.889586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.889879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.889885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.983 [2024-06-10 11:38:17.890152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.983 [2024-06-10 11:38:17.890158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.983 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.890352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.890359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.890558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.890566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.890834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.890840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.891052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.891060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.891385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.891392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 
00:31:20.984 [2024-06-10 11:38:17.891664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.891679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.891988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.891996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.892314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.892322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.892433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.892440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.892648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.892655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.892996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.893003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.893330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.893337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.893683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.893690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.893872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.893879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.894257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.894264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 
00:31:20.984 [2024-06-10 11:38:17.894462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.894469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.894790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.894797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.895023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.895030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.895255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.895262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.895575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.895581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.895754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.895761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.896079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.896087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.896401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.896409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.896722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.896730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.897123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.897130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 
00:31:20.984 [2024-06-10 11:38:17.897450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.897458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.897656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.897664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.897847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.897855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.898198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.898205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.898545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.898551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.898843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.898850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.899158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.984 [2024-06-10 11:38:17.899165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.984 qpair failed and we were unable to recover it. 00:31:20.984 [2024-06-10 11:38:17.899480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.899490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.899818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.899839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.900052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.900059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 
00:31:20.985 [2024-06-10 11:38:17.900370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.900376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.900715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.900721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.900893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.900900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.901228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.901236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.901567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.901573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.901877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.901884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.902191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.902198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.902364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.902371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.902676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.902683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.902996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.903003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 
00:31:20.985 [2024-06-10 11:38:17.903306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.903313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.903631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.903637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.903942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.903950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.904282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.904289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.904639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.904646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.904866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.904872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.905259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.905265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.905559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.905567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.905875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.905882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.906175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.906182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 
00:31:20.985 [2024-06-10 11:38:17.906495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.906501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.906875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.906882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.907193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.907200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.907512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.985 [2024-06-10 11:38:17.907518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.985 qpair failed and we were unable to recover it. 00:31:20.985 [2024-06-10 11:38:17.907893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.907900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.908194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.908201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.908514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.908520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.908824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.908832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.909165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.909171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.909386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.909393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 
00:31:20.986 [2024-06-10 11:38:17.909715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.909722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.909901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.909909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.910196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.910203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.910503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.910509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.910827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.910834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.911183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.911189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.911507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.911513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.911815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.911828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.912152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.912160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.912542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.912548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 
00:31:20.986 [2024-06-10 11:38:17.912841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.912849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.913186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.913192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.913484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.913492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.913806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.913812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.914109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.914117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.914451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.914457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.914749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.914757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.915083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.915090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.915398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.915405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.915629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.915636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 
00:31:20.986 [2024-06-10 11:38:17.915930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.915938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.916241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.916247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.916558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.916565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.916901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.916907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.986 [2024-06-10 11:38:17.917231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.986 [2024-06-10 11:38:17.917238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.986 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.917554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.917561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.917869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.917877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.918192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.918199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.918497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.918504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.918683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.918690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 
00:31:20.987 [2024-06-10 11:38:17.919018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.919025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.919360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.919367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.919665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.919672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.919757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.919763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.919939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.919946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.920268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.920275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.920567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.920575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.920832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.920839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.921137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.921143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.921445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.921451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 
00:31:20.987 [2024-06-10 11:38:17.921644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.921650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.921957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.921964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.922301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.922308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.922594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.922601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.922894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.922902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.923208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.923215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.923588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.923595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.923881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.923890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.924219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.924225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.924532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.924539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 
00:31:20.987 [2024-06-10 11:38:17.924878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.924885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.925267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.925274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.925498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.925505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.925808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.925814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.926142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.926149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.926482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.926489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.987 [2024-06-10 11:38:17.926814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.987 [2024-06-10 11:38:17.926820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.987 qpair failed and we were unable to recover it. 00:31:20.988 [2024-06-10 11:38:17.927120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.988 [2024-06-10 11:38:17.927127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.988 qpair failed and we were unable to recover it. 00:31:20.988 [2024-06-10 11:38:17.927439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.988 [2024-06-10 11:38:17.927446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.988 qpair failed and we were unable to recover it. 00:31:20.988 [2024-06-10 11:38:17.927652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.988 [2024-06-10 11:38:17.927659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.988 qpair failed and we were unable to recover it. 
00:31:20.989 [2024-06-10 11:38:17.927966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.927973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.928287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.928294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.928638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.928645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.928858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.928865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.929065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.929073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.929382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.929389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.929764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.929770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.930065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.930074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.930406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.930412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.930753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.930760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 
00:31:20.989 [2024-06-10 11:38:17.931075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.931083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.931258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.931265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.931544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.931551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.931774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.931782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.932098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.932105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.932404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.932412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.932749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.932756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.933057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.933064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.933385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.933391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.933721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.933728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 
00:31:20.989 [2024-06-10 11:38:17.934048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.934055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.934374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.934380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.934681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.934688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.935001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.935008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.935338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.935345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.935682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.935689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.936025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.936032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.936332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.989 [2024-06-10 11:38:17.936342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.989 qpair failed and we were unable to recover it. 00:31:20.989 [2024-06-10 11:38:17.936427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.990 [2024-06-10 11:38:17.936434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:20.990 qpair failed and we were unable to recover it. 
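The errno = 111 that posix_sock_create keeps reporting above is ECONNREFUSED on Linux: the target address 10.0.0.2 answers, but nothing is listening on port 4420 (the NVMe/TCP default), so every connect() attempt fails immediately and the qpair cannot be established. The following is a minimal sketch, not part of the SPDK test itself, showing how a plain POSIX connect() to a reachable host with no listener on that port produces the same errno; if the host were unreachable instead, a different errno (e.g. EHOSTUNREACH or a timeout) would be seen.

/*
 * Illustration only: reproduce "connect() failed, errno = 111"
 * (ECONNREFUSED) with a plain POSIX socket. The address/port mirror
 * the log; they are assumptions for this standalone example.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),   /* NVMe/TCP default port, as in the log */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* With the host reachable but no listener on the port,
		 * this prints errno = 111 (Connection refused). */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}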
00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Read completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 Write completed with error (sct=0, sc=8) 00:31:20.990 starting I/O failed 00:31:20.990 [2024-06-10 11:38:17.936693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:20.990 [2024-06-10 11:38:17.937064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.990 [2024-06-10 11:38:17.937098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.990 qpair failed and we were unable to recover it. 
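Once the TCP connection to the target drops, the outstanding reads and writes above are completed back to the application with an NVMe error status instead of hanging, and spdk_nvme_qpair_process_completions itself reports the transport error -6 (-ENXIO, "No such device or address") for qpair id 1; the host then retries against a fresh qpair (tqpair=0x7f0600000b90). A minimal sketch, assuming only the public spdk/nvme.h completion-callback API and not the test's own code, of how such failed completions surface in an I/O callback:

/*
 * Hedged illustration: an SPDK NVMe I/O completion callback that logs
 * failed completions in the same "(sct=..., sc=...)" form as the log
 * above. spdk_nvme_cpl_is_error() and the status fields are part of
 * the public SPDK headers; everything else here is example scaffolding.
 */
#include <stdio.h>
#include <spdk/nvme.h>

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct/sc are the NVMe status code type and status code
		 * carried in the completion entry. */
		fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
		/* Treat the request as failed; resubmitting only makes
		 * sense after the qpair/controller has been reconnected. */
		return;
	}
	/* Success path: release buffers, account for the completion, etc. */
	(void)ctx;
}

In this log the error completions arrive in a burst because every in-flight command on the failed qpair is completed with an error status at once, which is the expected behavior when the transport connection is lost mid-I/O.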
00:31:20.990 [2024-06-10 11:38:17.937445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.990 [2024-06-10 11:38:17.937456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:20.990 qpair failed and we were unable to recover it.
00:31:20.990-00:31:20.996 The same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) recurs for every reconnect attempt logged between 11:38:17.937 and 11:38:18.003.
00:31:20.996 [2024-06-10 11:38:18.003616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.996 [2024-06-10 11:38:18.003624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.996 qpair failed and we were unable to recover it. 00:31:20.996 [2024-06-10 11:38:18.003964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.996 [2024-06-10 11:38:18.003973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.996 qpair failed and we were unable to recover it. 00:31:20.996 [2024-06-10 11:38:18.004279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.996 [2024-06-10 11:38:18.004287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.996 qpair failed and we were unable to recover it. 00:31:20.996 [2024-06-10 11:38:18.004604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.996 [2024-06-10 11:38:18.004612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.996 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.004831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.004840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.005139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.005147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.005440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.005449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.005771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.005784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.006074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.006084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.006399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.006407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 
00:31:20.997 [2024-06-10 11:38:18.006747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.006756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.007074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.007083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.007412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.007421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.007756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.007764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.007995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.008004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.008381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.008390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.008688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.008697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.008994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.009003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.009317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.009326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.009638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.009647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 
00:31:20.997 [2024-06-10 11:38:18.009856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.009865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.010164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.010173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.010486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.010495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.010837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.010845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.011159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.011168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.011473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.011481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.011813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.011824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.012148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.012156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.012466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.012475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.012770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.012780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 
00:31:20.997 [2024-06-10 11:38:18.013094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.013103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.013445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.013453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.013736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.013744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.014066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.014075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.014375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.014385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.014716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.014725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.015049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.997 [2024-06-10 11:38:18.015059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.997 qpair failed and we were unable to recover it. 00:31:20.997 [2024-06-10 11:38:18.015390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.015400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.015775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.015784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.016009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.016018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 
00:31:20.998 [2024-06-10 11:38:18.016336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.016345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.016558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.016567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.016890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.016900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.017289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.017299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.017519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.017528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.017715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.017725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.018033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.018043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.018365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.018377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.018634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.018643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.018964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.018974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 
00:31:20.998 [2024-06-10 11:38:18.019287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.019297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.019482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.019492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.019828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.019838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.020056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.020066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.020411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.020421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.020511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.020520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.020804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.020814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.021137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.021147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.021484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.021494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.021836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.021846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 
00:31:20.998 [2024-06-10 11:38:18.022193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.022203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.022517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.022527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.022866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.022884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.023216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.023225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.023552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.023562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.023893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.023903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.024241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.024250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.024568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.024578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.024870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.024880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 00:31:20.998 [2024-06-10 11:38:18.025184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.998 [2024-06-10 11:38:18.025194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.998 qpair failed and we were unable to recover it. 
00:31:20.998 [2024-06-10 11:38:18.025526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.025535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.025902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.025912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.026304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.026314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.026628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.026638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.026975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.026985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.027325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.027335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.027674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.027683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.028004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.028014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.028151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.028161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.028461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.028470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 
00:31:20.999 [2024-06-10 11:38:18.028805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.028814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.029125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.029135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.029365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.029375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.029562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.029573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.029891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.029901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.030212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.030222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.030553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.030563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.030779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.030790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.031129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.031139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.031452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.031462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 
00:31:20.999 [2024-06-10 11:38:18.031802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.031812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.032112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.032122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.032435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.032445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.032758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.032768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.033092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.033101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.033314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.033323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.033601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.033610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.033940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.033950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:20.999 qpair failed and we were unable to recover it. 00:31:20.999 [2024-06-10 11:38:18.034267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.999 [2024-06-10 11:38:18.034276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.034503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.034512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 
00:31:21.000 [2024-06-10 11:38:18.034844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.034854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.035204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.035213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.035392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.035402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.035763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.035772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.035972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.035982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.036318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.036327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.036516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.036526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.036840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.036849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.037193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.037203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.037514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.037524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 
00:31:21.000 [2024-06-10 11:38:18.037837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.037847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.038172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.038182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.038496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.038506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.038869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.038879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.039215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.039225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.039418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.039427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.039756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.039765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.040032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.040042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.040354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.040363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.040693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.040703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 
00:31:21.000 [2024-06-10 11:38:18.041006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.041015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.041423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.041432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.041768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.041777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.042086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.042096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.042229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.042238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.042520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.042529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.042860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.042869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.043239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.043250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.043582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.043592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 00:31:21.000 [2024-06-10 11:38:18.043908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.000 [2024-06-10 11:38:18.043918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.000 qpair failed and we were unable to recover it. 
00:31:21.000 [2024-06-10 11:38:18.044124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.000 [2024-06-10 11:38:18.044134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.000 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-06-10 11:38:18.044 through 11:38:18.107 (Jenkins timestamps 00:31:21.000-00:31:21.007); every connection attempt in this span fails the same way ...]
00:31:21.007 [2024-06-10 11:38:18.107234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.107243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.107457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.107481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.107805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.107813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.108219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.108247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.108582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.108591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.109003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.109031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.109360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.109369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.109592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.109599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.110017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.110045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.110419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.110428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 
00:31:21.007 [2024-06-10 11:38:18.110643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.110650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.110947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.110955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.111211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.111218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.111501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.111508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.111833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.111844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.112167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.112175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.112499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.112507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.007 [2024-06-10 11:38:18.112837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.007 [2024-06-10 11:38:18.112844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.007 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.113119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.113127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.113442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.113449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 
00:31:21.008 [2024-06-10 11:38:18.113762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.113769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.113955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.113963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.114278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.114286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.114603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.114611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.114941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.114948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.115282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.115288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.115595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.115602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.115916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.115924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.116226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.116234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.116572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.116579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 
00:31:21.008 [2024-06-10 11:38:18.116899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.116906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.117271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.117278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.117464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.117472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.117753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.118094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.118101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.118273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.118280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.118555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.118561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.118875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.118882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.119172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.119179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.119479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.119486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 
00:31:21.008 [2024-06-10 11:38:18.119805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.119812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.120145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.120152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.120446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.120453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.120660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.120668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.120976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.120983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.121294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.121301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.121611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.121618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.121999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.122006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.122322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.122329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.008 [2024-06-10 11:38:18.122618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.122625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 
00:31:21.008 [2024-06-10 11:38:18.122981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.008 [2024-06-10 11:38:18.122988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.008 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.123289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.123295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.123621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.123628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.123954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.123962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.124275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.124284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.124618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.124624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.124953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.124960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.125293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.125300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.125537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.125544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.125701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.125708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 
00:31:21.009 [2024-06-10 11:38:18.126011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.126018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.126350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.126357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.126583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.126590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.126892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.126931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.127247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.127254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.127498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.127505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.127778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.127785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.128103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.128117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.128433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.128440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.128769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.128777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 
00:31:21.009 [2024-06-10 11:38:18.129089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.129096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.129429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.129437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.129755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.129762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.130117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.130124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.130506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.130514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.130729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.130736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.131010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.131017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.131233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.131240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.131563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.131570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.131876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.131883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 
00:31:21.009 [2024-06-10 11:38:18.132197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.132204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.132536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.132543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.132770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.132777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-10 11:38:18.133106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-10 11:38:18.133113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.133380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.133387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.133718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.133725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.134052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.134059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.134354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.134361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.134694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.134701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.135008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.135016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 
00:31:21.010 [2024-06-10 11:38:18.135328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.135334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.135633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.135641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.135972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.135979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.136318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.136324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.136645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.136653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.137018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.137025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.137366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.137373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.137703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.137709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.138011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.138018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.138326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.138334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 
00:31:21.010 [2024-06-10 11:38:18.138597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.138604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.138833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.138841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.139139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.139146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.139439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.139446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.139727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.139733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.140042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.140049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.140238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.140245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.140579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.140585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.140899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.140906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.141235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.141241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 
00:31:21.010 [2024-06-10 11:38:18.141467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.141473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.141751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-10 11:38:18.141758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-10 11:38:18.142097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.142104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.142399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.142406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.142621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.142627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.142953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.142960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.143271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.143278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.143462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.143469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.143789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.143795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.144104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.144112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 
00:31:21.011 [2024-06-10 11:38:18.144429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.144436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.144738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.144746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.145067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.145074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.145413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.145421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.145735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.145742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.146050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.146057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.146281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.146288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.146671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.146678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.147008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.147014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 00:31:21.011 [2024-06-10 11:38:18.147310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.011 [2024-06-10 11:38:18.147318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.011 qpair failed and we were unable to recover it. 
00:31:21.011 [2024-06-10 11:38:18.147617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.011 [2024-06-10 11:38:18.147624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420
00:31:21.011 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error entries for tqpair=0x7f05f8000b90 (addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it.", repeat for every reconnect attempt from 11:38:18.147 through 11:38:18.211 ...]
00:31:21.294 [2024-06-10 11:38:18.211940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.294 [2024-06-10 11:38:18.211947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420
00:31:21.294 qpair failed and we were unable to recover it.
00:31:21.294 [2024-06-10 11:38:18.212302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.212309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.212639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.212646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.212970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.212977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.213370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.213377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.213692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.213698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.214018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.214025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.214342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.214349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.214688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.214694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.215013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.215020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.215308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.215314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 
00:31:21.294 [2024-06-10 11:38:18.215644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.215651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.215977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.215984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.216312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.216319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.216656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.216663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.216975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.216982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.217314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.217321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.217634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.217641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.217945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.217952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.218322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.218329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.218651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.218658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 
00:31:21.294 [2024-06-10 11:38:18.218833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.218841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.219161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.219168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.219460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.219467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.219785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.219791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.220135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.220142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.220479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.294 [2024-06-10 11:38:18.220486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.294 qpair failed and we were unable to recover it. 00:31:21.294 [2024-06-10 11:38:18.220676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.220683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.221003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.221010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.221291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.221298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.221596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.221604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 
00:31:21.295 [2024-06-10 11:38:18.221940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.221947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.222242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.222249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.222568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.222574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.222759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.222766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.223089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.223096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.223411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.223418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.223647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.223654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.223994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.224001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.224337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.224343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.224657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.224663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 
00:31:21.295 [2024-06-10 11:38:18.224896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.224902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.225220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.225226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.225548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.225555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.225868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.225875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.226192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.226199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.226540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.226547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.226857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.226863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.227166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.227173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.227488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.227494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.227832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.227839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 
00:31:21.295 [2024-06-10 11:38:18.228130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.228138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.228453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.228460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.228756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.228771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.229158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.229165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.229473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.229479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.229712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.229718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.230004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.230011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.230321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.230328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.230511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.295 [2024-06-10 11:38:18.230519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.295 qpair failed and we were unable to recover it. 00:31:21.295 [2024-06-10 11:38:18.230836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.230844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 
00:31:21.296 [2024-06-10 11:38:18.231156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.231163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.231507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.231516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.231811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.231818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.232118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.232125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.232436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.232443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.232778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.232785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.233088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.233096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.233332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.233339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.233642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.233649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.234027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.234034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 
00:31:21.296 [2024-06-10 11:38:18.234347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.234354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.234697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.234703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.235017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.235025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.235329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.235336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.235666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.235673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.235999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.236006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.236325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.236331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.236646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.236653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.236879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.236886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.237116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.237122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 
00:31:21.296 [2024-06-10 11:38:18.237314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.237320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.237655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.237662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.237965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.237973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.238286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.238293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.238596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.238603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.238939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.238946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.239293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.239299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.239618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.239625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.239949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.239956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.240281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.240287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 
00:31:21.296 [2024-06-10 11:38:18.240625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.240631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.240945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.296 [2024-06-10 11:38:18.240952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.296 qpair failed and we were unable to recover it. 00:31:21.296 [2024-06-10 11:38:18.241296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.241303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.241500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.241507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.241818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.241827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.242144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.242151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.242463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.242471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.242693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.242700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.243014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.243022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.243247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.243253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 
00:31:21.297 [2024-06-10 11:38:18.243564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.243571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.243893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.243902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.244294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.244301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.244585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.244593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.244918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.244925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.245261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.245268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.245601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.245608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.245947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.245954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.246262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.246268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.246593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.246599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 
00:31:21.297 [2024-06-10 11:38:18.246939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.246946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.247133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.247140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.247455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.247462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.247748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.247755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.248095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.248102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.248440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.248448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.248709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.248717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.248941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.248948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.297 qpair failed and we were unable to recover it. 00:31:21.297 [2024-06-10 11:38:18.249259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.297 [2024-06-10 11:38:18.249267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.249583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.249590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 
00:31:21.298 [2024-06-10 11:38:18.249898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.249905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.250204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.250211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.250526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.250532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.250872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.250879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.250969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.250976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.251374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.251380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.251686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.251693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.252033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.252040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.252213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.252221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.252550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.252557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 
00:31:21.298 [2024-06-10 11:38:18.252881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.252889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.253196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.253202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.253506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.253513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.253826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.253833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.254206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.254213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.254503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.254511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.254831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.254838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.255218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.255225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.255515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.255522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.255886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.255893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 
00:31:21.298 [2024-06-10 11:38:18.256187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.256194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.256500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.256510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.256849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.256857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.257169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.257176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.257479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.257487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.257622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.257634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.257943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.258282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.258289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.258605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.258612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.258918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.258925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 
00:31:21.298 [2024-06-10 11:38:18.259224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.259231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.298 qpair failed and we were unable to recover it. 00:31:21.298 [2024-06-10 11:38:18.259555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.298 [2024-06-10 11:38:18.259562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.259900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.259908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.260175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.260181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.260391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.260397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.260740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.260747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.261054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.261062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.261376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.261383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.261684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.261691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.261936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.261942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 
00:31:21.299 [2024-06-10 11:38:18.262234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.262241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.262554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.262562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.262874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.262881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.263183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.263190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.263501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.263508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.263885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.263891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.264190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.264197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.264375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.264383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.264757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.264764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.265119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.265126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 
00:31:21.299 [2024-06-10 11:38:18.265470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.265477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.265818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.265828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.266051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.266057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.266345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.266352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.266682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.266689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.267019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.267027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.267403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.267411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.267736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.267743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.268077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.268084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.268461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.268468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 
00:31:21.299 [2024-06-10 11:38:18.268638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.268645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.268982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.268992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.269327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.269335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.269666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.269673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.299 qpair failed and we were unable to recover it. 00:31:21.299 [2024-06-10 11:38:18.269919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-06-10 11:38:18.269927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.270141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.270149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.270481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.270489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.270830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.270838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.271179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.271187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.271501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.271508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 
00:31:21.300 [2024-06-10 11:38:18.271844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.271852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.272080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.272087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.272405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.272412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.272712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.272719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.273043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.273051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.273351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.273358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.273672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.273680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.273992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.274000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.274339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.274346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.274533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.274540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 
00:31:21.300 [2024-06-10 11:38:18.274725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.274733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.274975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.274983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.275321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.275328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.275663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.275671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.275993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.276000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.276293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.276300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.276628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.276636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.276848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.276856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.277121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.277128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.277461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.277469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 
00:31:21.300 [2024-06-10 11:38:18.277757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.277764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.278126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.278133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.278492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.278499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.278733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.278741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.279052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.279060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.279395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.279403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.300 qpair failed and we were unable to recover it. 00:31:21.300 [2024-06-10 11:38:18.279722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.300 [2024-06-10 11:38:18.279729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.280069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.280077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.280414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.280421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.280751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.280759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 
00:31:21.301 [2024-06-10 11:38:18.281063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.281071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.281391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.281400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.281738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.281745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.282069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.282077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.282394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.282401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.282684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.282691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.282992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.283000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.283301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.283309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.283633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.283641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.283924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.283931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 
00:31:21.301 [2024-06-10 11:38:18.284251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.284259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.284481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.284488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.284792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.284799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.284990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.284998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.285325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.285333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.285403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.285410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.285699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.285706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.286012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.286020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.286327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.286334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.286681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.286689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 
00:31:21.301 [2024-06-10 11:38:18.287019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.287026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.287329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.287336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.287649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.287657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.287842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.287850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.288162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.288169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.288488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.288496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.288815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.288824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.289152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.289159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.289473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.301 [2024-06-10 11:38:18.289481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.301 qpair failed and we were unable to recover it. 00:31:21.301 [2024-06-10 11:38:18.289796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.289804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 
00:31:21.302 [2024-06-10 11:38:18.290046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.290053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.290241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.290248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.290577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.290584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.290900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.290907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.291182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.291189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.291368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.291375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.291688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.291696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.291881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.291889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.292214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.292221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.292408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.292416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 
00:31:21.302 [2024-06-10 11:38:18.292597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.292605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.292911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.292921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.293077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.293085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.293429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.293436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.293735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.293743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.294022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.294030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.294240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.294248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.294555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.294562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.294904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.294911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.295225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.295232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 
00:31:21.302 [2024-06-10 11:38:18.295562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.295569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.295833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.295841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.302 [2024-06-10 11:38:18.296022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.302 [2024-06-10 11:38:18.296030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.302 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.296404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.296411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.296729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.296736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.296968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.296975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.297213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.297221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.297536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.297544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.297752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.297759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.298081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.298088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 
00:31:21.303 [2024-06-10 11:38:18.298428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.298435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.298748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.298756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.298970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.298978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.299325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.299332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.299662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.299670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.299986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.299994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.300328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.300335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.300687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.300694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.301027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.301035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.301369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.301376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 
00:31:21.303 [2024-06-10 11:38:18.301676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.301683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.302025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.302033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.302363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.302370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.302705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.302712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.303045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.303052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.303353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.303361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.303685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.303692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.304009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.304017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.304249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.304256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 00:31:21.303 [2024-06-10 11:38:18.304561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.303 [2024-06-10 11:38:18.304569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.303 qpair failed and we were unable to recover it. 
00:31:21.303 [2024-06-10 11:38:18.304902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.303 [2024-06-10 11:38:18.304910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420
00:31:21.303 qpair failed and we were unable to recover it.
00:31:21.303 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:38:18.305 through 11:38:18.367; none of the attempts recovered ...]
00:31:21.310 [2024-06-10 11:38:18.368311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.368317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.368637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.368644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.369031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.369038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.369332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.369339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.369656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.369662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.370000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.370007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.370325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.370332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.370664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.370670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.371012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.371020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.310 qpair failed and we were unable to recover it. 00:31:21.310 [2024-06-10 11:38:18.371342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.310 [2024-06-10 11:38:18.371349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 
00:31:21.311 [2024-06-10 11:38:18.371650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.371658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.371981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.371988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.372325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.372331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.372673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.372680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.372996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.373002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.373342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.373349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.373681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.373688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.374034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.374041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.374331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.374339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.374671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.374678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 
00:31:21.311 [2024-06-10 11:38:18.375058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.375064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.375219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.375225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.375403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.375411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.375726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.375733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.376029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.376037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.376353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.376359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.376578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.376585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.376888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.376894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.377258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.377264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.377563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.377570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 
00:31:21.311 [2024-06-10 11:38:18.377809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.377816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.378166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.378173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.378416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.378423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.378762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.378769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.379137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.379144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.379473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.379480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.379812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.379819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.380180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.380187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.380495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.380502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 00:31:21.311 [2024-06-10 11:38:18.380839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.311 [2024-06-10 11:38:18.380846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.311 qpair failed and we were unable to recover it. 
00:31:21.311 [2024-06-10 11:38:18.381160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.381167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.381508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.381515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.381700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.381708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.382083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.382090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.382415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.382421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.382741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.382748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.383088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.383095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.383409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.383415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.383796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.383803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.384094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.384102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 
00:31:21.312 [2024-06-10 11:38:18.384423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.384430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.384596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.384603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.384795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.384809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.385107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.385114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.385410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.385417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.385711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.385718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.386026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.386033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.386381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.386387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.386730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.386738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.387075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.387084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 
00:31:21.312 [2024-06-10 11:38:18.387397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.387404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.387709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.387717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.388019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.388026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.388228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.388235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.388406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.388414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.388718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.388725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.389023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.389037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.389374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.389381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.389691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.389698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.389914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.389921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 
00:31:21.312 [2024-06-10 11:38:18.390245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.390252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.390458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.390465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.390764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.390772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-10 11:38:18.391077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-10 11:38:18.391084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.391386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.391393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.391690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.391696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.392004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.392012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.392322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.392329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.392633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.392640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.392972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.392979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-10 11:38:18.393288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.393295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.393595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.393602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.393909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.393916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.394283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.394290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.394608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.394614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.394857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.394864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.395169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.395176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.395516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.395523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.395724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.395731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.395943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.395950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-10 11:38:18.396166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.396172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.396454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.396461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.396807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.396814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.397143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.397150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.397377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.397384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.397613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.397620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.397942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.397949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.398207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.398213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.398425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.398434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-10 11:38:18.398685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.398691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-10 11:38:18.398988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-10 11:38:18.398996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.399325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.399332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.399635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.399643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.399974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.399981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.400286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.400294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.400592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.400598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.400895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.400903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.401227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.401233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.401573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.401579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.401897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.401904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-10 11:38:18.402222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.402229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.402405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.402412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.402747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.402754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.403070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.403076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.403305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.403312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.403657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.403665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.403981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.403988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.404191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.404198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.404518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.404524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.404827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.404834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-10 11:38:18.405162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.405169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.405401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.405408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.405721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.405728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.406043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.406050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.406362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-10 11:38:18.406369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-10 11:38:18.406707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.406714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.407055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.407069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.407284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.407291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.407603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.407610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.407937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.407944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 
00:31:21.315 [2024-06-10 11:38:18.408162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.408169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.408515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.408521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.408816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.408825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.409196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.409203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.409487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.409495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.409838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.409845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.410157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.410171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.410486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.410493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.410674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.410683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-10 11:38:18.410964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-10 11:38:18.410971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 
00:31:21.322 [2024-06-10 11:38:18.472139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-10 11:38:18.472145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-10 11:38:18.472450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-10 11:38:18.472457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-10 11:38:18.472772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-10 11:38:18.472779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.473075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.473082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.473397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.473403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.473728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.473735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.474078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.474086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.474274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.474280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.474583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.474589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.474783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.474789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [2024-06-10 11:38:18.475137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.475144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.475464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.475470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.475671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.475678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.475984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.475991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.476292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.476299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.476617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.476624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.476810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.476816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.477107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.477113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.477438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.477444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.477766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.477772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [2024-06-10 11:38:18.477992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.477999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.478336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.478342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.478647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.478654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.478993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.478999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.479302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.479309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.479596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.479602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.479646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.479653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.479945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.479952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.480314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.480320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.480605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.480613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [2024-06-10 11:38:18.480798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.480805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.481136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.481142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.481457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.481464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.481801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.481807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-10 11:38:18.482000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-10 11:38:18.482007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.482360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.482366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.482670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.482677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.482966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.482973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.483176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.483184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.483398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.483405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-10 11:38:18.483729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.483736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.484071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.484078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.484401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.484408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.484722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.484729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.485048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.485055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.485360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.485367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.485703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.485710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.486033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.486039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.486310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.486316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.486493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.486500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-10 11:38:18.486819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.486828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.487164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.487171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.487396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.487403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.487580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.487587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.487794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.487801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.488133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.488140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.488455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.488461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.488748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.488755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.489150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.489157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.489461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.489469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-10 11:38:18.489849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.489856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.490165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.490172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.490508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.490515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.490798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.490806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.491128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.491135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.491427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.491435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.491617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.491624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-10 11:38:18.491953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-10 11:38:18.491960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.492252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.492259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.492588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.492594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 
00:31:21.325 [2024-06-10 11:38:18.492933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.492940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.493287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.493294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.493614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.493621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.493844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.493851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.494046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.494053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.494354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.494361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.494561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.494568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.494887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.494894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.495227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.495236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.495549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.495556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 
00:31:21.325 [2024-06-10 11:38:18.495924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.495931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.496260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.496267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.496642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.496650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.496962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.496969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.497290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.497297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.497613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.497619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.497946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.497954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.498268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.498275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.498587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.498594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.498909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.498916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 
00:31:21.325 [2024-06-10 11:38:18.499248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.499255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.499585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.499591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.499900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.499907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.500217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.500231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.500557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.500563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-10 11:38:18.500868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-10 11:38:18.500876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.501191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.501200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.501536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.501544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.501878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.501885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.502202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.502209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 
00:31:21.601 [2024-06-10 11:38:18.502519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.502525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.502831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.502839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.503043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.503050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.503348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.503355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.503659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.503665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.503965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.503973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.504308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.504315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.504616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.504623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.504883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.504889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.505217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.505224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 
00:31:21.601 [2024-06-10 11:38:18.505561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.505567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.505811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.505817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.506152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.506159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.506290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.506297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.506575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.506583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.506895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.506902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.507174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.507181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.601 qpair failed and we were unable to recover it. 00:31:21.601 [2024-06-10 11:38:18.507506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.601 [2024-06-10 11:38:18.507513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.507850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.507860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.508178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.508185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 
00:31:21.602 [2024-06-10 11:38:18.508516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.508522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.508801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.508808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.509049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.509056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.509435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.509441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.509725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.509733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.510060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.510067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.510431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.510438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.510745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.510753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.511068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.511074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.511375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.511382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 
00:31:21.602 [2024-06-10 11:38:18.511715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.511722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.512048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.512055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.512367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.512374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.512687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.512695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.513058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.513065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.513226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.513233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.513545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.513552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.513866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.513873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.514066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.514074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.514441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.514448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 
00:31:21.602 [2024-06-10 11:38:18.514767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.514773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.515030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.515037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.515393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.515401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.515713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.515720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.516061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.516069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.516378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.516385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.516584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.602 [2024-06-10 11:38:18.516591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.602 qpair failed and we were unable to recover it. 00:31:21.602 [2024-06-10 11:38:18.516916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.516924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.517263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.517271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.517609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.517616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 
00:31:21.603 [2024-06-10 11:38:18.517952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.517959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.518301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.518308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.518621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.518629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.518839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.518846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.519199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.519206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.519398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.519406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.519573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.519581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.519910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.519918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.520176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.520185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.520527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.520534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 
00:31:21.603 [2024-06-10 11:38:18.520749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.520757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.521069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.521077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.521391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.521399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.521612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.521620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.521931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.521938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.522281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.522289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.522620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.522628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.523021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.523028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.523346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.523354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.523531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.523538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 
00:31:21.603 [2024-06-10 11:38:18.523806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.523813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.524057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.524065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.524384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.524391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.524617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.524624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.524944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.603 [2024-06-10 11:38:18.524951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.603 qpair failed and we were unable to recover it. 00:31:21.603 [2024-06-10 11:38:18.525285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.525293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.525610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.525618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.525947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.525955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.526249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.526256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.526570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.526578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 
00:31:21.604 [2024-06-10 11:38:18.526892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.526900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.527121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.527128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.527292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.527300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.527615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.527623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.527881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.527889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.528201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.528208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.528285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.528292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.528709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.528744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.529223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.529257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.529470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.529481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 
00:31:21.604 [2024-06-10 11:38:18.529833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.529843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.530187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.530196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.530515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.530525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.530853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.530863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.531176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.531186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.531519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.531528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.531845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.531855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.532042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.532051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.532380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.532390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.532731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.532741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 
00:31:21.604 [2024-06-10 11:38:18.533048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.533059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.533387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.533397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.533754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.533763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.534016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.534026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.534263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.534273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.534586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.534595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.534898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.604 [2024-06-10 11:38:18.534909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.604 qpair failed and we were unable to recover it. 00:31:21.604 [2024-06-10 11:38:18.535244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.535253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.535525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.535535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.535828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.535837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 
00:31:21.605 [2024-06-10 11:38:18.536155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.536164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.536468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.536477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.536792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.536802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.537118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.537128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.537502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.537511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.537810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.537819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.538127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.538137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.538447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.538456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.538795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.538804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.539147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.539157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 
00:31:21.605 [2024-06-10 11:38:18.539464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.539473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.539683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.539692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.539953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.539963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.540311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.540321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.540508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.540518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.540838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.540849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.541142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.541151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.541464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.541473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.541795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.541805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.542153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.542163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 
00:31:21.605 [2024-06-10 11:38:18.542489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.542498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.542828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.542837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.543031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.543042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.543337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.543347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.543676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.543685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.544002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.544012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.544312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.605 [2024-06-10 11:38:18.544321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.605 qpair failed and we were unable to recover it. 00:31:21.605 [2024-06-10 11:38:18.544646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.544655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.544972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.544982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.545285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.545295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 
00:31:21.606 [2024-06-10 11:38:18.545632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.545641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.545959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.545969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.546304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.546314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.546640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.546649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.546975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.546985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.547174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.547183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.547520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.547529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.547746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.547755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.547960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.547970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.548298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.548307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 
00:31:21.606 [2024-06-10 11:38:18.548519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.548529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.548857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.548867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.549155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.549165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.549353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.549363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.549704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.549713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.550007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.550016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.550333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.550342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.550554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.550563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.550788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.550797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.551163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.551173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 
00:31:21.606 [2024-06-10 11:38:18.551495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.551504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.551805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.551814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.552127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.552136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.552450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.552459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.606 [2024-06-10 11:38:18.552774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.606 [2024-06-10 11:38:18.552783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.606 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.553126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.553138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.553465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.553476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.553810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.553820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.554203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.554212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.554486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.554495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 
00:31:21.607 [2024-06-10 11:38:18.554828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.554838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.555146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.555156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.555418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.555428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.555770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.555779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.556092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.556101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.556442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.556451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.556768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.556777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.557091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.557101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.557290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.557300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.557607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.557616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 
00:31:21.607 [2024-06-10 11:38:18.557935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.557944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.558278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.558288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.558647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.558657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.558998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.559007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.559339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.559348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.559656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.607 [2024-06-10 11:38:18.559666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.607 qpair failed and we were unable to recover it. 00:31:21.607 [2024-06-10 11:38:18.559983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.559992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.560321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.560331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.560648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.560658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.560974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.560983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 
00:31:21.608 [2024-06-10 11:38:18.561205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.561214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.561516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.561526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.561696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.561706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.562048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.562058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.562250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.562259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.562590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.562599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.562982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.562992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.563372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.563381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.563716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.563725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 00:31:21.608 [2024-06-10 11:38:18.563952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.608 [2024-06-10 11:38:18.563962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.608 qpair failed and we were unable to recover it. 
00:31:21.608 [2024-06-10 11:38:18.564266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.608 [2024-06-10 11:38:18.564275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.608 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:38:18.564266 through 11:38:18.628539 ...]
00:31:21.615 [2024-06-10 11:38:18.628530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.615 [2024-06-10 11:38:18.628539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.615 qpair failed and we were unable to recover it.
00:31:21.615 [2024-06-10 11:38:18.628846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.628855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.629191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.629200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.629531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.629540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.629858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.629867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.630082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.630091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.630399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.630408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.630711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.630719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.630895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.630904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.631274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.631282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.631576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.631586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 
00:31:21.615 [2024-06-10 11:38:18.631897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.631906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.632208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.632218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.632515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.632524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.632823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.632832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.633039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.633047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.633349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.633358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.633671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.633679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-10 11:38:18.633983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-10 11:38:18.633993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.634309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.634317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.634635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.634644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-10 11:38:18.634961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.634970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.635303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.635311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.635649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.635658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.635987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.635996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.636286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.636296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.636631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.636639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.636937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.636947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.637278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.637287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.637621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.637629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.637946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.637955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-10 11:38:18.638303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.638311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.638594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.638604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.638826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.638835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.639163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.639172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.639486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.639495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.639824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.639835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.640168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.640176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.640479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.640488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.640800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.640809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.640983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.640992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-10 11:38:18.641318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.641326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.641605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.641614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.641930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.641939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.642239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.642248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.642593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.642602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.642907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.642916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.643215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.643224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.643551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.643560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-10 11:38:18.643859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-10 11:38:18.643869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.644213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.644221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-10 11:38:18.644440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.644449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.644759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.644768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.645088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.645097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.645433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.645441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.645651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.645659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.645957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.645966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.646333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.646342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.646693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.646701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.647010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.647019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.647312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.647321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-10 11:38:18.647610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.647619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.647956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.647964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.648269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.648279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.648590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.648599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.648933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.648942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.649151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.649160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.649514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.649523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.649856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.649866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.650180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.650189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.650547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.650556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-10 11:38:18.650868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.650877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.651208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.651216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.651548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.651556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.651894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.651903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.652216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.652224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.652556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.652567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.652875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.652884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-10 11:38:18.653223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-10 11:38:18.653232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.653557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.653566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.653739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.653748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 
00:31:21.618 [2024-06-10 11:38:18.654040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.654049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.654381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.654390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.654701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.654710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.654921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.654930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.655247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.655256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.655590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.655598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.655826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.655834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.656141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.656150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.656410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.656418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.656741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.656750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 
00:31:21.618 [2024-06-10 11:38:18.657064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.657073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.657402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.657411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.657790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.657800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.658108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.658118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.658423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.658433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.658768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.658777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.659089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-10 11:38:18.659099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-10 11:38:18.659319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.659329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.659645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.659654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.659982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.659990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 
00:31:21.619 [2024-06-10 11:38:18.660205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.660214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.660522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.660531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.660833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.660843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.660993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.661002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.661331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.661340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.661638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.661647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.661960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.661969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.662265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.662274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.662460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.662470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.662787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.662795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 
00:31:21.619 [2024-06-10 11:38:18.663083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.663091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.663412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.663421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.663735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.663743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.664053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.664063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.664375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.664383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.664665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.664676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.664993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.665002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.665223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.665232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.665546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.665555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.665741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.665750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 
00:31:21.619 [2024-06-10 11:38:18.666058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.666067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.666390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.666399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.666715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.666724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.667047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.667056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.667361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.667370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.667702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.667711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.668030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.619 [2024-06-10 11:38:18.668039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.619 qpair failed and we were unable to recover it. 00:31:21.619 [2024-06-10 11:38:18.668216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.620 [2024-06-10 11:38:18.668225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.620 qpair failed and we were unable to recover it. 00:31:21.620 [2024-06-10 11:38:18.668553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.620 [2024-06-10 11:38:18.668561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.620 qpair failed and we were unable to recover it. 00:31:21.620 [2024-06-10 11:38:18.668857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.620 [2024-06-10 11:38:18.668867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.620 qpair failed and we were unable to recover it. 
00:31:21.620 [2024-06-10 11:38:18.669050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.620 [2024-06-10 11:38:18.669060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.620 qpair failed and we were unable to recover it.
00:31:21.626 [2024-06-10 11:38:18.669381 through 11:38:18.734164] (collapsed: the same three-message sequence, posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.", repeats for every reconnect attempt in this interval, about 210 occurrences in total)
00:31:21.627 [2024-06-10 11:38:18.734499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.734507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.734851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.734860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.735184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.735196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.735527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.735536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.735862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.735872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.736204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.736213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.736589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.736597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.736934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.736942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.737254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.737263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.737594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.737603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [2024-06-10 11:38:18.737914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.737923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.738270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.738278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.738665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.738674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.738979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.738988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.739341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.739349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.739682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.739690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.740008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.740017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.740323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.740332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.740621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.740630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.740949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.740958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [2024-06-10 11:38:18.741310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.741319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.741630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.741639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.741952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.741961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.742254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.742263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.742587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.742596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.742933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.742942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.743268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.743277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.743616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-10 11:38:18.743626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-10 11:38:18.743940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.743949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.744098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.744107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 
00:31:21.628 [2024-06-10 11:38:18.744402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.744410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.744751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.744760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.745092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.745101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.745430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.745439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.745617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.745626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.745904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.745913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.746214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.746222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.746555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.746564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.746739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.746748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.747083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.747092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 
00:31:21.628 [2024-06-10 11:38:18.747337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.747345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.747642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.747651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.747974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.747985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.748308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.748317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.748593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.748602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.748932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.748940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.749248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.749257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.749567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.749576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.749912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.749921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.750220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.750228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 
00:31:21.628 [2024-06-10 11:38:18.750430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.750440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.750750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.750759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.751065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.751074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.751405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.751413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.751592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.751602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.751929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.751938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.752248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.752258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.752592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.752601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.752873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.752882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 00:31:21.628 [2024-06-10 11:38:18.753192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.628 [2024-06-10 11:38:18.753201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.628 qpair failed and we were unable to recover it. 
00:31:21.628 [2024-06-10 11:38:18.753524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.753533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.753828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.753838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.754159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.754167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.754474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.754483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.754838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.754847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.755209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.755217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.755400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.755409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.755760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.755769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.755980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.755989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.756286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.756296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 
00:31:21.629 [2024-06-10 11:38:18.756519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.756528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.756876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.756885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.757192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.757201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.757542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.757551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.757861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.757870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.758192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.758200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.758575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.758583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.758878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.758888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.759210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.759219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.759444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.759453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 
00:31:21.629 [2024-06-10 11:38:18.759833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.759842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.760174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.760182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.760499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.760510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.760844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.760853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.760947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.760956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.761257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.629 [2024-06-10 11:38:18.761266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.629 qpair failed and we were unable to recover it. 00:31:21.629 [2024-06-10 11:38:18.761579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.761587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.761901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.761910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.762204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.762214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.762543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.762552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 
00:31:21.630 [2024-06-10 11:38:18.762856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.762865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.763191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.763199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.763380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.763390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.763679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.763688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.764023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.764032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.764201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.764211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.764544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.764554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.764890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.764899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.765228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.765237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.765396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.765406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 
00:31:21.630 [2024-06-10 11:38:18.765736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.765746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.766055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.766064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.766378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.766387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.766668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.766677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.766886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.766895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.767232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.767241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.767561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.767570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.767907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.767917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.768247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.768256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.768573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.768583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 
00:31:21.630 [2024-06-10 11:38:18.768898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.768908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.630 [2024-06-10 11:38:18.769101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.630 [2024-06-10 11:38:18.769111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.630 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.769441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.769450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.769757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.769767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.770081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.770090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.770409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.770418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.770644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.770653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.770972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.770982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.771309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.771318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.771532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.771541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 
00:31:21.631 [2024-06-10 11:38:18.771844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.771854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.772038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.772048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.772268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.772280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.772498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.772507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.772795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.772804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.773131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.773141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.773230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.773239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.773509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.773518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.773740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.773750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.774075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.774085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 
00:31:21.631 [2024-06-10 11:38:18.774400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.774410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.774745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.774754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.775085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.775094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.775418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.775427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.775737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.775746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.776066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.776075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.776391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.776401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.776734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.776743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.777071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.777080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.777399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.777409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 
00:31:21.631 [2024-06-10 11:38:18.777632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.777641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.777974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.631 [2024-06-10 11:38:18.777983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.631 qpair failed and we were unable to recover it. 00:31:21.631 [2024-06-10 11:38:18.778293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.778302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.778600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.778609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.778832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.778842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.779027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.779038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.779314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.779323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.779651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.779660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.779968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.779978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.780265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.780275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 
00:31:21.632 [2024-06-10 11:38:18.780494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.780503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.780678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.780687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.780797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.780806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.781106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.781115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.781287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.781298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.781483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.781492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.781836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.781846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.782156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.782166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.782475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.782484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.782825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.782835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 
00:31:21.632 [2024-06-10 11:38:18.783163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.783173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.783363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.783372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.783701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.783713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.784054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.784064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.784392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.784402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.784737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.784746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.784972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.784982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.785283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.785293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.785644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.785653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.785839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.785848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 
00:31:21.632 [2024-06-10 11:38:18.786129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.786139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.786484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.786493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.786720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.632 [2024-06-10 11:38:18.786729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.632 qpair failed and we were unable to recover it. 00:31:21.632 [2024-06-10 11:38:18.786929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.786939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.787114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.787124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.787454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.787464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.787796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.787805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.787993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.788003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.788365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.788375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.788666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.788676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-10 11:38:18.789020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.789030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.789418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.789427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.789741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.789751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.790083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.790093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.790393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.790403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.790732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.790742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.790916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.790926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.791208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.791218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.791387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.791396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.791717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.791727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-10 11:38:18.791910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.791919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.792208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.792217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.792553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.792562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.792846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.792855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.793177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.793186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.793474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.793483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.793805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.793814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.794139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.794148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.794470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.794479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.794792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.794802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-10 11:38:18.795128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.795138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.795453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.795462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.795803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.795815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-10 11:38:18.796137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-10 11:38:18.796146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.796520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.796530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.796829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.796839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.796911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.796919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.797135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.797144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.797351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.797360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.797699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.797708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-10 11:38:18.798048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.798057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.798372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.798381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.798696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.798705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.799022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.799031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.799370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.799379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.799568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.799578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.799843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.799852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.800184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.800193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.800534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.800543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.800872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.800882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-10 11:38:18.801159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.801169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.801498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.801507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.801820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.801833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.802176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.802186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.802487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.802496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.802800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.802809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.803120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.803130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.803448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.803456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.803769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.803778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.804115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.804125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-10 11:38:18.804471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.804480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.804690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.804700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.804990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.805000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.805325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.805335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.805519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.805528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-10 11:38:18.805858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-10 11:38:18.805868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.806058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.806068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.806382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.806392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.806637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.806646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.806783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.806793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-10 11:38:18.807122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.807132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.807461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.807471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.807784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.807795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.808103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.808113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.808423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.808432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.808754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.808764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.808956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.808966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.809168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.809177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.809397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.809406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.809736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.809745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-10 11:38:18.810032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.810042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.810158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.810167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.810484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.810493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.810704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.810713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-10 11:38:18.810901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-10 11:38:18.810912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.911 [2024-06-10 11:38:18.811227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.911 [2024-06-10 11:38:18.811238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.911 qpair failed and we were unable to recover it. 00:31:21.911 [2024-06-10 11:38:18.811412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.911 [2024-06-10 11:38:18.811421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.911 qpair failed and we were unable to recover it. 00:31:21.911 [2024-06-10 11:38:18.811770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.911 [2024-06-10 11:38:18.811780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.911 qpair failed and we were unable to recover it. 00:31:21.911 [2024-06-10 11:38:18.811971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.811981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.812301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.812310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 
00:31:21.912 [2024-06-10 11:38:18.812628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.812638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.812830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.812839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.813176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.813185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.813522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.813531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.813838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.813848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.814167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.814177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.814511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.814521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.814776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.814785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.815123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.815133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.815466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.815476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 
00:31:21.912 [2024-06-10 11:38:18.815663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.815672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.815929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.815939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.816248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.816257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.816587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.816596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.816906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.816916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.817229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.817239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.817584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.817593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.817873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.817883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.818219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.818229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.818443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.818452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 
00:31:21.912 [2024-06-10 11:38:18.818634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.818644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.818931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.818940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.819257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.819269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.819455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.819465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.819807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.819816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.820115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.820125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.820456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.820465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.912 [2024-06-10 11:38:18.820743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.912 [2024-06-10 11:38:18.820753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.912 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.820937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.820946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.821270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.821279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 
00:31:21.913 [2024-06-10 11:38:18.821540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.821549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.821758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.821768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.821963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.821973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.822283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.822293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.822613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.822622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.822907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.822915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.823265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.823273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.823610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.823620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.824016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.824025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 00:31:21.913 [2024-06-10 11:38:18.824318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.913 [2024-06-10 11:38:18.824327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.913 qpair failed and we were unable to recover it. 
00:31:21.913 [2024-06-10 11:38:18.824507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.913 [2024-06-10 11:38:18.824517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.913 qpair failed and we were unable to recover it.
[... the same three-line record — "connect() failed, errno = 111" from posix.c:1037:posix_sock_create, "sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats for every connection retry, differing only in timestamps, up to the final attempt shown below ...]
00:31:21.920 [2024-06-10 11:38:18.889924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.920 [2024-06-10 11:38:18.889935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.920 qpair failed and we were unable to recover it.
00:31:21.920 [2024-06-10 11:38:18.890242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.890251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.890578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.890587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.890868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.890878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.891197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.891206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.891540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.891548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.891818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.891830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.892114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.892123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.892471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.892480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.892803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.892811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.893148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.893157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 
00:31:21.920 [2024-06-10 11:38:18.893524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.893534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.893716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.893726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.894015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.894026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.894332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.894342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.894664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.894673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.894990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.894999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.895298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.895307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.895639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.895648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.895942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.895952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.896260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.896270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 
00:31:21.920 [2024-06-10 11:38:18.896493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.896501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.896685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.920 [2024-06-10 11:38:18.896695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.920 qpair failed and we were unable to recover it. 00:31:21.920 [2024-06-10 11:38:18.897000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.897009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.897304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.897314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.897627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.897636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.897812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.897826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.898155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.898164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.898499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.898508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.898858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.898867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.899176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.899185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 
00:31:21.921 [2024-06-10 11:38:18.899488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.899499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.899834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.899844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.900126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.900135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.900470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.900478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.900786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.900795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.901109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.901119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.901451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.901459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.901791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.901801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.901977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.901987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.902268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.902280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 
00:31:21.921 [2024-06-10 11:38:18.902644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.902653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.902956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.902965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.903284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.903293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.903653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.903662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.903856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.903866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.904184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.904193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.904506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.904516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.904830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.904840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.905145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.905154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.905494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.905503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 
00:31:21.921 [2024-06-10 11:38:18.905819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.905831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.921 [2024-06-10 11:38:18.906141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.921 [2024-06-10 11:38:18.906150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.921 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.906480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.906488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.906829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.906838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.907057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.907066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.907389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.907397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.907653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.907662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.907955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.907964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.908271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.908281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.908615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.908624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 
00:31:21.922 [2024-06-10 11:38:18.908832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.908841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.909173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.909182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.909518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.909527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.909842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.909851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.910181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.910189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.910575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.910584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.910917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.910926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.911264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.911273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.911459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.911468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.911811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.911820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 
00:31:21.922 [2024-06-10 11:38:18.912111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.912120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.912441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.912451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.912834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.912843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.913144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.913152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.913444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.913454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.913767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.913775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.914074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.914084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.914415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.914424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.914758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.914767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.915082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.915093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 
00:31:21.922 [2024-06-10 11:38:18.915370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.915379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.915701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.915710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.922 qpair failed and we were unable to recover it. 00:31:21.922 [2024-06-10 11:38:18.915940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.922 [2024-06-10 11:38:18.915950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.916152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.916161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.916469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.916478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.916813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.916826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.917121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.917131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.917312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.917321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.917594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.917603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.917935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.917945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 
00:31:21.923 [2024-06-10 11:38:18.918277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.918286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.918593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.918602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.918941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.918951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.919295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.919304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.919635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.919644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.919978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.919987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.920294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.920303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.920541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.920550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.920871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.920880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.921176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.921186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 
00:31:21.923 [2024-06-10 11:38:18.921568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.921576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.921876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.921885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.922222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.922231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.922533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.922543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.922860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.922869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.923165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.923174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.923492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.923500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.923858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.923867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.924153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.924162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 00:31:21.923 [2024-06-10 11:38:18.924484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.923 [2024-06-10 11:38:18.924493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.923 qpair failed and we were unable to recover it. 
00:31:21.923 [2024-06-10 11:38:18.924830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.924840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.925170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.925179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.925509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.925518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.925860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.925869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.926185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.926194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.926378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.926388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.926763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.926772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.927114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.927124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.927447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.927455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.927782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.927794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 
00:31:21.924 [2024-06-10 11:38:18.927980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.927991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.928315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.928324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.928648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.928657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.928987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.928996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.929287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.929297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.929626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.929635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.929931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.929940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.930224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.930233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.930536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.930546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 00:31:21.924 [2024-06-10 11:38:18.930933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.924 [2024-06-10 11:38:18.930941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.924 qpair failed and we were unable to recover it. 
00:31:21.924 [2024-06-10 11:38:18.931239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.924 [2024-06-10 11:38:18.931249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.924 qpair failed and we were unable to recover it.
00:31:21.924 [2024-06-10 11:38:18.931555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.924 [2024-06-10 11:38:18.931564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.924 qpair failed and we were unable to recover it.
00:31:21.924 [2024-06-10 11:38:18.931864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.924 [2024-06-10 11:38:18.931874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.924 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 11:38:18.932202 through 11:38:18.995794: connect() failed, errno = 111 (posix.c:1037:posix_sock_create), then sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock), then "qpair failed and we were unable to recover it." ...]
00:31:21.931 [2024-06-10 11:38:18.996131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.931 [2024-06-10 11:38:18.996141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:21.931 qpair failed and we were unable to recover it.
00:31:21.931 [2024-06-10 11:38:18.996235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.996244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.996547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.996556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.996766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.996776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.997092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.997103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.997433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.997443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.997773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.997783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.998102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.998112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.998297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.998306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.998636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.931 [2024-06-10 11:38:18.998646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.931 qpair failed and we were unable to recover it. 00:31:21.931 [2024-06-10 11:38:18.998982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:18.998991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 
00:31:21.932 [2024-06-10 11:38:18.999348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:18.999357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:18.999642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:18.999652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:18.999861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:18.999870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.000213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.000221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.000431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.000439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.000740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.000750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.001047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.001056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.001397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.001406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.001584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.001594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.001910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.001923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 
00:31:21.932 [2024-06-10 11:38:19.002257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.002267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.002608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.002616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.002913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.002923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.003237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.003246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.003548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.003557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.003861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.003870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.004191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.004200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.004515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.004523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.004846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.004855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.005159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.005167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 
00:31:21.932 [2024-06-10 11:38:19.005506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.005515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.005835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.005845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.006161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.006170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.006507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.006516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.006844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.006853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.007165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.007173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.007401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.007410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.007756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.007765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-10 11:38:19.008070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-10 11:38:19.008080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.008383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.008391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 
00:31:21.933 [2024-06-10 11:38:19.008552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.008561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.008889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.008898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.009192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.009202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.009515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.009524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.009829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.009839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.010145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.010154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.010451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.010463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.010776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.010785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.011070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.011079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.011407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.011415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 
00:31:21.933 [2024-06-10 11:38:19.011734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.011743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.012072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.012081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.012374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.012384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.012717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.012726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.013071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.013081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.013397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.013406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.013739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.013748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.014062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.014072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.014387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.014396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.014699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.014709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 
00:31:21.933 [2024-06-10 11:38:19.015044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.015054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.015388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.015398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.015730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.015740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.016071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.016081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.016416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.016426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.016726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.016735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.017066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.017076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.017396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.017405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.017584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.933 [2024-06-10 11:38:19.017594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.933 qpair failed and we were unable to recover it. 00:31:21.933 [2024-06-10 11:38:19.017959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.017969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 
00:31:21.934 [2024-06-10 11:38:19.018283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.018292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.018439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.018449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.018728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.018738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.019047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.019057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.019415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.019425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.019642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.019652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.019886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.019895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.020122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.020132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.020346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.020355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.020640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.020650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 
00:31:21.934 [2024-06-10 11:38:19.020964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.020974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.021281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.021290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.021619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.021629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.021993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.022002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.022317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.022326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.022661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.022671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.022982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.022994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.023331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.023341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.023655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.023664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.023998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.024008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 
00:31:21.934 [2024-06-10 11:38:19.024214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.024224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.024537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.934 [2024-06-10 11:38:19.024547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.934 qpair failed and we were unable to recover it. 00:31:21.934 [2024-06-10 11:38:19.024882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.024892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.025111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.025120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.025277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.025287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.025587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.025597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.025943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.025952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.026190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.026200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.026520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.026530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.026852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.026862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 
00:31:21.935 [2024-06-10 11:38:19.027171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.027180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.027487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.027496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.027796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.027806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.028128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.028138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.028468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.028478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.028815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.028829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.029185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.029195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.029508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.029517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.029850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.029860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.030087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.030096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 
00:31:21.935 [2024-06-10 11:38:19.030412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.030421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.030701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.030711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.030902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.030912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.031236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.031246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.031551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.031561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.031835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.031845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.032078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.032087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.032428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.032438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.032673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.032682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.032892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.032901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 
00:31:21.935 [2024-06-10 11:38:19.033207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.033217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.033547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.033557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.033651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.033661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.935 [2024-06-10 11:38:19.033884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.935 [2024-06-10 11:38:19.033894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.935 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.034072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.034082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.034279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.034289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.034599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.034611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.034939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.034949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.035278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.035287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.035625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.035634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 
00:31:21.936 [2024-06-10 11:38:19.036058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.036068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.036296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.036306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.036476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.036486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.036815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.036828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.037130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.037139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.037451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.037460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.037776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.037786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.038099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.038109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.038451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.038460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.038773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.038782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 
00:31:21.936 [2024-06-10 11:38:19.039019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.039028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.039329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.039338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.039704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.039713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.039897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.039907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.040277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.040286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.040489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.040498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.040806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.040815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.041152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.041161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.041476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.041485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.041883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.041892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 
00:31:21.936 [2024-06-10 11:38:19.041982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.041990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.042278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.042288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.042607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.042616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.042944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.042954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.936 [2024-06-10 11:38:19.043292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.936 [2024-06-10 11:38:19.043301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.936 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.043637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.043646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.043962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.043971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.044301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.044310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.044493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.044503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.044806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.044815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 
00:31:21.937 [2024-06-10 11:38:19.045131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.045141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.045473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.045482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.045817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.045836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.046149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.046158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.046490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.046499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.046853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.046863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.047205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.047216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.047510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.047519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.047835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.047845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.048150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.048159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 
00:31:21.937 [2024-06-10 11:38:19.048495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.048504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.048785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.048794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.049007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.049017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.049239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.049248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.049597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.049607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.049994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.050003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.050336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.050346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.050671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.050681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.050885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.050895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.051213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.051222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 
00:31:21.937 [2024-06-10 11:38:19.051539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.051549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.051872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.051882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.052217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.052226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.052412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.052422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.052707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.052717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.053045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.937 [2024-06-10 11:38:19.053055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.937 qpair failed and we were unable to recover it. 00:31:21.937 [2024-06-10 11:38:19.053329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.053338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.053665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.053674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.053990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.054000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.054323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.054332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 
00:31:21.938 [2024-06-10 11:38:19.054680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.054689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.054874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.054884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.055246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.055256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.055570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.055579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.055911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.055921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.056262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.056271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.056585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.056595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.056902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.056913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.057242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.057251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.057588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.057598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 
00:31:21.938 [2024-06-10 11:38:19.057687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.057695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.057873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.057883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.058215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.058224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.058411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.058420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.058708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.058717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.058924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.058934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.059226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.059237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.059449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.059459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.059741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.059750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.059931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.059942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 
00:31:21.938 [2024-06-10 11:38:19.060251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.060260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.060440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.060450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.060829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.060839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.061158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.061167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-10 11:38:19.061486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-10 11:38:19.061495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.061801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.061810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.062129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.062138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.062359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.062368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.062698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.062708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.063019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.063029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-10 11:38:19.063368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.063377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.063746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.063755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.064074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.064084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.064301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.064310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.064631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.064641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.064956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.064966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.065241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.065251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.065583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.065593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.065779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.065789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.066114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.066124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-10 11:38:19.066458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.066467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.066683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.066692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.067013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.067023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.067210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.067220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.067434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.067443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.067735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.067745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.068080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.068089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.068410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.068420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.068602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.068611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.068798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.068808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-10 11:38:19.069034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.069044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.069329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.069338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.069542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.069552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.069889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.069898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.070107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.070115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-10 11:38:19.070336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-10 11:38:19.070345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.070675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.070686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.070987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.070997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.071322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.071330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.071615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.071624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-10 11:38:19.071957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.071966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.072162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.072171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.072494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.072502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.072842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.072852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.073176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.073185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.073489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.073499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.073813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.073825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.074139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.074147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.074440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.074450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.074787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.074795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-10 11:38:19.075113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.075122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.075463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.075472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.075804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.075812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.076138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.076147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.076482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.076491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.076813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.076830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.077160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.077169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.077471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.077480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.077799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.077808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.078151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.078160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-10 11:38:19.078408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.078417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.078722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.078731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-10 11:38:19.079039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-10 11:38:19.079048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.079373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.079382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.079717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.079726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.080072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.080082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.080388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.080398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.080582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.080591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.080816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.080829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.081161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.081169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-10 11:38:19.081463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.081472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.081805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.081813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.082058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.082066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.082402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.082410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.082710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.082719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.083053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.083062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.083406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.083417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.083730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.083739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.083957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.083966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.084297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.084306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-10 11:38:19.084486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.084495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.084798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.084807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.085121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.085131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.085430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.085439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.085779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.085788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.086095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.086105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.086375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.086383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.086680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.086690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.086907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.086916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.087219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.087228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-10 11:38:19.087572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.087581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.087763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.087772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.088104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-10 11:38:19.088113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-10 11:38:19.088399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.088407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.088742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.088750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.089003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.089012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.089347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.089355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.089572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.089581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.089887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.089896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 00:31:21.942 [2024-06-10 11:38:19.090082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.942 [2024-06-10 11:38:19.090092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:21.942 qpair failed and we were unable to recover it. 
00:31:21.942 [2024-06-10 11:38:19.090376 .. 11:38:19.148502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (the same three-message sequence repeats back-to-back throughout this interval with no other output)
00:31:22.225 [2024-06-10 11:38:19.148843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.148853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.149155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.149165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.149342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.149352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.149547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.149556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.149766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.149775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.150103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.150113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.150425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.150434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.150775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.150785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.151114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.151124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.151476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.151485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 
00:31:22.225 [2024-06-10 11:38:19.151801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.151811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.152160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.152170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.152500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.152509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.152712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.152722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.153004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.153014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.153342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.153351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.153686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.153696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.153997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.154007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.154320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.154329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.154661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.154670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 
00:31:22.225 [2024-06-10 11:38:19.155005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.155014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.155333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.155343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.155656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.155665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.156001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.156011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.156341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.225 [2024-06-10 11:38:19.156350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.225 qpair failed and we were unable to recover it. 00:31:22.225 [2024-06-10 11:38:19.156663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.156672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.156878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.156889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.157241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.157251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.157574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.157584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.157901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.157911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 
00:31:22.226 [2024-06-10 11:38:19.158127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.158136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.158458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.158468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.158692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.158701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.159040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.159049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.159367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.159376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.159706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.159716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.160042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.160052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.160265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.160274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.160594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.160603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.160931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.160941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 
00:31:22.226 [2024-06-10 11:38:19.161266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.161276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.161587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.161597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.161944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.161953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.162284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.162294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.162623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.162632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.162944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.162954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.163323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.163332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.163666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.163675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.163997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.164007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.164321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.164330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 
00:31:22.226 [2024-06-10 11:38:19.164641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.164651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.164980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.164990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.165285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.165295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.165612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.165622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.165945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.165954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.166270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.166279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.166612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.166621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.226 [2024-06-10 11:38:19.166855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.226 [2024-06-10 11:38:19.166864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.226 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.167166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.167175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.167507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.167516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 
00:31:22.227 [2024-06-10 11:38:19.167727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.167736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.167960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.167969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.168253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.168262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.168608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.168617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.168977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.168986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.169351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.169360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.169678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.169686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.170144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.170153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.170495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.170505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.170820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.170839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 
00:31:22.227 [2024-06-10 11:38:19.171155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.171164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.171380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.171389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.171715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.171723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.171942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.171951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.172261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.172269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.172614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.172623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.172947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.172956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.173264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.173273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.173590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.173599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.173877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.173886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 
00:31:22.227 [2024-06-10 11:38:19.174208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.174217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.174593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.174602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.174898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.174906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.175170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.227 [2024-06-10 11:38:19.175178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.227 qpair failed and we were unable to recover it. 00:31:22.227 [2024-06-10 11:38:19.175472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.175482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.175801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.175810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.176190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.176199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.176528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.176537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.176856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.176865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.177051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.177061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 
00:31:22.228 [2024-06-10 11:38:19.177431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.177440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.177771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.177780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.178116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.178126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.178444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.178456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.178772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.178781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.179095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.179106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.179436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.179446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.179741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.179750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.179949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.179960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.180331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.180340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 
00:31:22.228 [2024-06-10 11:38:19.180558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.180567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.180886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.180895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.181207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.181217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.181567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.181575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.181875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.181916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.182152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.182160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.182458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.182468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.182854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.182864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.183244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.183253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.183470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.183479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 
00:31:22.228 [2024-06-10 11:38:19.183790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.183800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.184039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.184049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.184279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.184289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.184618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.184629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.184827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.184837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.185117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.185127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.185449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.228 [2024-06-10 11:38:19.185458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.228 qpair failed and we were unable to recover it. 00:31:22.228 [2024-06-10 11:38:19.185636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.185645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.185935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.185944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.186344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.186354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 
00:31:22.229 [2024-06-10 11:38:19.186684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.186694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.187005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.187015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.187243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.187252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.187550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.187560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.187899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.187908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.188232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.188241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.188553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.188561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.188850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.188860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.189108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.189116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.189451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.189461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 
00:31:22.229 [2024-06-10 11:38:19.189642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.189650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.189826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.189835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.190155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.190164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.190513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.190524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.190806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.190815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.191164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.191174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.191485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.191494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.191797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.191806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.192186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.192196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.192442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.192450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 
00:31:22.229 [2024-06-10 11:38:19.192567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.192576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.192914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.192924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.193172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.193180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.193390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.193399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.193795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.193805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.194123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.194132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.194443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.194453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.194648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.194658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.194971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.229 [2024-06-10 11:38:19.194980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.229 qpair failed and we were unable to recover it. 00:31:22.229 [2024-06-10 11:38:19.195293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.195309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 
00:31:22.230 [2024-06-10 11:38:19.195519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.195527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.195744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.195753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.196084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.196093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.196430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.196439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.196760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.196768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.197138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.197148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.197455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.197463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.197661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.197677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.197917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.197926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.198210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.198218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 
00:31:22.230 [2024-06-10 11:38:19.198408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.198418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.198635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.198644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.198875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.198884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.199185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.199193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.199572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.199581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.199930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.199939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.200264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.200273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.200588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.200597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.200910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.200919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.201255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.201264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 
00:31:22.230 [2024-06-10 11:38:19.201611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.201620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.201931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.201940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.202305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.202313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.202670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.202681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.203000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.203010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.203356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.203364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.203649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.203658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.203942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.203951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.204268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.204278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.204617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.204625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 
00:31:22.230 [2024-06-10 11:38:19.204951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.204961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.230 [2024-06-10 11:38:19.205274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.230 [2024-06-10 11:38:19.205282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.230 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.205496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.205505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.205849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.205858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.206098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.206107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.206399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.206407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.206692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.206700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.207085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.207095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.207409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.207418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.207725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.207733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 
00:31:22.231 [2024-06-10 11:38:19.208079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.208087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.208395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.208404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.208759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.208767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.209069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.209079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.209402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.209410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.209744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.209752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.210047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.210057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.210267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.210275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.210603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.210612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.210828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.210837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 
00:31:22.231 [2024-06-10 11:38:19.211146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.211155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.211537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.211545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.211859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.211868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.212204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.212213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.212602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.212611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.212923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.212933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.213268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.213276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.213598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.213607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.213992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.214001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.214345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.214354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 
00:31:22.231 [2024-06-10 11:38:19.214543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.214552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.214803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.214812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.215001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.215011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.215327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.231 [2024-06-10 11:38:19.215344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.231 qpair failed and we were unable to recover it. 00:31:22.231 [2024-06-10 11:38:19.215666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.215676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.215991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.216000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.216316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.216325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.216661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.216670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.216892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.216902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.217209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.217217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 
00:31:22.232 [2024-06-10 11:38:19.217477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.217486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.217802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.217811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.218064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.218073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.218300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.218309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.218600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.218609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.218913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.218923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.219216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.219225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.219545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.219553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.219850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.219858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.220183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.220191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 
00:31:22.232 [2024-06-10 11:38:19.220380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.220389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.220604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.220613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.220998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.221007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.221386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.221395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.221585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.221594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.221905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.221914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.222268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.222277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.222593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.222602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.222794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.222803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.223108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.223117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 
00:31:22.232 [2024-06-10 11:38:19.223440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.223449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.223772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.223781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.224094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.224103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.224288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.224297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.224525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.224534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.224873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.224883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.232 [2024-06-10 11:38:19.225197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.232 [2024-06-10 11:38:19.225206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.232 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.225549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.225557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.225809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.225818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.226166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.226175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 
00:31:22.233 [2024-06-10 11:38:19.226467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.226477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.226689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.226698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.227014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.227024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.227347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.227358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.227649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.227658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.228002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.228011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.228373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.228381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.228599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.228607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.228866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.228875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.229173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.229182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 
00:31:22.233 [2024-06-10 11:38:19.229504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.229513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.229727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.229735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.230066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.230075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.230439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.230447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.230757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.230766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.231115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.231124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.231416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.231425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.231613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.231623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.231815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.231828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.232159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.232167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 
00:31:22.233 [2024-06-10 11:38:19.232424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.232432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.232772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.232780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.233084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.233094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.233421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.233429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.233 qpair failed and we were unable to recover it. 00:31:22.233 [2024-06-10 11:38:19.233730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.233 [2024-06-10 11:38:19.233739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.234105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.234114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.234427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.234436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.234773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.234782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.234966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.234975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.235161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.235171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 
00:31:22.234 [2024-06-10 11:38:19.235505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.235514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.235810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.235820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.236159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.236168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.236460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.236469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.236784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.236793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.237076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.237085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.237358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.237367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.237691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.237700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.237921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.237930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.238170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.238179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 
00:31:22.234 [2024-06-10 11:38:19.238500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.238509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.238848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.238857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.239089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.239097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.239358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.239369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.239684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.239693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.239957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.239967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.240288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.240297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.240537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.240545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.240805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.240813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.241090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.241099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 
00:31:22.234 [2024-06-10 11:38:19.241415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.241423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.241761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.241770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.242159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.242175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.242350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.242359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.242641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.242650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.242808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.242817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.243064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.243073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.234 [2024-06-10 11:38:19.243385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.234 [2024-06-10 11:38:19.243394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.234 qpair failed and we were unable to recover it. 00:31:22.235 [2024-06-10 11:38:19.243689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.235 [2024-06-10 11:38:19.243698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.235 qpair failed and we were unable to recover it. 00:31:22.235 [2024-06-10 11:38:19.243931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.235 [2024-06-10 11:38:19.243941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.235 qpair failed and we were unable to recover it. 
00:31:22.235 [2024-06-10 11:38:19.244215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.235 [2024-06-10 11:38:19.244224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.235 qpair failed and we were unable to recover it.
[the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f0600000b90 (addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it.", repeats for every reconnect attempt logged from 11:38:19.244547 through 11:38:19.306440]
00:31:22.241 [2024-06-10 11:38:19.306778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.306790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.307091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.307103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.307389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.307400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.307570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.307581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.307786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.307797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.307993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.308004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.308110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.308119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.308439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.308450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.308758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.308769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.308963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.308974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 
00:31:22.241 [2024-06-10 11:38:19.309318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.309329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.309542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.309553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.309880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.309891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.241 [2024-06-10 11:38:19.310184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.241 [2024-06-10 11:38:19.310196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.241 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.310475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.310486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.310794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.310805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.311142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.311153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.311239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.311248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.311543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.311555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.311862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.311872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 
00:31:22.242 [2024-06-10 11:38:19.312218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.312230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.312563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.312573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.312801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.312811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.313127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.313138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.313428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.313438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.313746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.313757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.314074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.314085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.314437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.314447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.314784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.314795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.315071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.315082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 
00:31:22.242 [2024-06-10 11:38:19.315403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.315414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.315720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.315731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.316040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.316050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.316347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.316358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.316578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.316588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.316865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.316876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.317113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.317124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.317317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.317328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.317632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.317642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.317950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.317961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 
00:31:22.242 [2024-06-10 11:38:19.318256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.318267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.318456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.318466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.318632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.318642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.318916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.318926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.319230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.319240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.319573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.319583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.319893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.242 [2024-06-10 11:38:19.319903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.242 qpair failed and we were unable to recover it. 00:31:22.242 [2024-06-10 11:38:19.320253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.320264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.320572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.320583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.320784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.320794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 
00:31:22.243 [2024-06-10 11:38:19.320992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.321002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.321317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.321327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.321509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.321519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.321796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.321806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.322131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.322142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.322455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.322464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.322778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.322788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.323117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.323128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.323311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.323321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.323642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.323653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 
00:31:22.243 [2024-06-10 11:38:19.323968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.323978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.324310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.324322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.324632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.324643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.324967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.324977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.325319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.325329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.325660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.325671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.325859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.325868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.326048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.326058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.326383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.326394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.326726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.326737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 
00:31:22.243 [2024-06-10 11:38:19.327041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.327053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.327367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.327379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.327692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.327703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.328020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.328030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.243 [2024-06-10 11:38:19.328335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.243 [2024-06-10 11:38:19.328346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.243 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.328655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.328666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.328975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.328986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.329315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.329325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.329639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.329650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.329950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.329961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 
00:31:22.244 [2024-06-10 11:38:19.330291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.330304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.330631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.330642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.330950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.330961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.331284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.331294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.331612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.331622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.331926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.331938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.332248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.332258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.332443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.332452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.332772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.332782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.332962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.332973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 
00:31:22.244 [2024-06-10 11:38:19.333288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.333298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.333612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.333623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.333940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.333951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.334292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.334302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.334489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.334500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.334788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.334799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.335153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.335164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.335483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.335495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.335831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.335842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.336173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.336183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 
00:31:22.244 [2024-06-10 11:38:19.336513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.336523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.336860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.336871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.337221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.337231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.337561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.337571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.337884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.337895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.338222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.338232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.338569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.338579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.338886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.244 [2024-06-10 11:38:19.338897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.244 qpair failed and we were unable to recover it. 00:31:22.244 [2024-06-10 11:38:19.339215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.339227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.339559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.339570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 
00:31:22.245 [2024-06-10 11:38:19.339890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.339901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.340113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.340124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.340425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.340436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.340729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.340741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.340885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.340896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.341231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.341241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.341559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.341570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.341939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.341950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.342883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.342905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.343223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.343235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 
00:31:22.245 [2024-06-10 11:38:19.343549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.343562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.344443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.344464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.344789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.344800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.345571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.345590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.345900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.345912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.346574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.346592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.346926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.346939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.347263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.347273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.347584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.347594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.347933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.347943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 
00:31:22.245 [2024-06-10 11:38:19.348252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.348262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.348578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.348588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.348884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.348895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.349219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.349229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.349561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.349571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.349894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.349904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.350254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.350264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.350595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.350606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.350817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.350832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.351178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.351189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 
00:31:22.245 [2024-06-10 11:38:19.351501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.245 [2024-06-10 11:38:19.351510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.245 qpair failed and we were unable to recover it. 00:31:22.245 [2024-06-10 11:38:19.351815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.351828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.352037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.352047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.352359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.352369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.352679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.352689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.352993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.353004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.353338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.353348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.353662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.353673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.353989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.354000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.354327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.354338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 
00:31:22.246 [2024-06-10 11:38:19.354645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.354655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.354935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.354945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.355258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.355268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.355572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.355583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.355918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.355929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.356246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.356257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.356573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.356583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.356898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.356910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.357238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.357248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.357401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.357410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 
00:31:22.246 [2024-06-10 11:38:19.357629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.357650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.357964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.357974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.358307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.358317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.358628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.358639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.358958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.358969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.359296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.359306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.359642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.359653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.359968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.359979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.360287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.360298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.360600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.360611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 
00:31:22.246 [2024-06-10 11:38:19.360940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.360951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.361268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.361278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.361594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.361605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.361792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.361804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.246 qpair failed and we were unable to recover it. 00:31:22.246 [2024-06-10 11:38:19.362126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.246 [2024-06-10 11:38:19.362137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.362449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.362460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.362772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.362783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.362968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.362980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.363285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.363296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.363608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.363619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 
00:31:22.247 [2024-06-10 11:38:19.363932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.363943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.364268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.364279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.364459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.364470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.364787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.364797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.365133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.365144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.365466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.365476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.365805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.365815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.366133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.366145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.366481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.366491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.366827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.366839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 
00:31:22.247 [2024-06-10 11:38:19.367169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.367179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.367972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.367991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.368306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.368317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.368625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.368635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.368928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.368938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.369258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.369268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.369612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.369623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.369992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.370003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.370343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.370354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.370577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.370588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 
00:31:22.247 [2024-06-10 11:38:19.370928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.370942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.371265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.371275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.371604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.371615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.371940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.371951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.372139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.372149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.372448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.372458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.372767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.372777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.247 [2024-06-10 11:38:19.372942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.247 [2024-06-10 11:38:19.372954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.247 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.373267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.373277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.373356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.373365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 
00:31:22.248 [2024-06-10 11:38:19.373665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.373675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.374058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.374069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.374399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.374409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.374637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.374647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.375014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.375024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.375362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.375372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.375693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.375703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.376010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.376020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.376339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.376350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.376684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.376694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 
00:31:22.248 [2024-06-10 11:38:19.376920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.376931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.377263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.377273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.377612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.377623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.377849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.377860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.378157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.378169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.378473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.378483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.378797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.378807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.379122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.379133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.379346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.379356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.379677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.379687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 
00:31:22.248 [2024-06-10 11:38:19.380022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.380033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.380199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.380210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.380545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.380555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.380870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.380881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.381132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.381143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.381345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.248 [2024-06-10 11:38:19.381354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.248 qpair failed and we were unable to recover it. 00:31:22.248 [2024-06-10 11:38:19.381548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.381558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.381888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.381898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.382237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.382248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.382580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.382591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 
00:31:22.249 [2024-06-10 11:38:19.382884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.382896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.383202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.383212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.383553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.383562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.383860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.383871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.384177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.384186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.384513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.384523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.384845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.384858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.385192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.385202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.385528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.385538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.385846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.385856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 
00:31:22.249 [2024-06-10 11:38:19.386073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.386084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.386418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.386428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.386789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.386799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.386952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.386962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.387303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.387313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.387644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.387654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.387811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.387826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.387936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.387946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.388177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.388188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.388491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.388502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 
00:31:22.249 [2024-06-10 11:38:19.388816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.388832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.389172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.389182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.389512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.389523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.389792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.389801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.390077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.390089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.390424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.390434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.390584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.390594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.390819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.390834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.391195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.391205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.249 qpair failed and we were unable to recover it. 00:31:22.249 [2024-06-10 11:38:19.391516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.249 [2024-06-10 11:38:19.391526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 
00:31:22.250 [2024-06-10 11:38:19.391818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.391831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.392161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.392171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.392491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.392501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.392811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.392826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.393146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.393156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.393448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.393459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.393796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.393806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.394131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.394141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.394257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.394267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.394459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.394469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 
00:31:22.250 [2024-06-10 11:38:19.394650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.394663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.394879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.394890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.395098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.395107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.395327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.395338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.395546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.395556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.395876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.395887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.396236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.396246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.396590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.396600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.396815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.396828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.397158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.397169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 
00:31:22.250 [2024-06-10 11:38:19.397481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.397491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.397816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.397830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.398167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.398178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.398446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.398465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.398776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.398787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.399075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.399087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.399408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.399417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.399727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.399737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.400056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.400067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.400403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.400413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 
00:31:22.250 [2024-06-10 11:38:19.400632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.400641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.400995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.401006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.250 [2024-06-10 11:38:19.401317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.250 [2024-06-10 11:38:19.401327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.250 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.401614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.401625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.401863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.401873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.402214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.402223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.402559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.402569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.402755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.402766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.403053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.403064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 00:31:22.251 [2024-06-10 11:38:19.403403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.251 [2024-06-10 11:38:19.403414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.251 qpair failed and we were unable to recover it. 
00:31:22.251 [2024-06-10 11:38:19.403631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.403640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.403935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.403945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.404043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.404052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Write completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 Read completed with error (sct=0, sc=8)
00:31:22.251 starting I/O failed
00:31:22.251 [2024-06-10 11:38:19.404760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:22.251 [2024-06-10 11:38:19.405319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.405405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f0000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.405763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.405800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f0000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.406117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.406205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f0000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.406419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.406431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.251 [2024-06-10 11:38:19.406741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.251 [2024-06-10 11:38:19.406752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.251 qpair failed and we were unable to recover it.
00:31:22.252 [2024-06-10 11:38:19.406979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.252 [2024-06-10 11:38:19.406990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.252 qpair failed and we were unable to recover it.
00:31:22.252 [2024-06-10 11:38:19.407305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.252 [2024-06-10 11:38:19.407316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.252 qpair failed and we were unable to recover it.
00:31:22.252 [2024-06-10 11:38:19.407653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.252 [2024-06-10 11:38:19.407663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.252 qpair failed and we were unable to recover it.
00:31:22.252 [2024-06-10 11:38:19.407775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.252 [2024-06-10 11:38:19.407785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.252 qpair failed and we were unable to recover it.
00:31:22.252 [2024-06-10 11:38:19.408100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.408110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.408458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.408468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.408685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.408695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.409126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.409136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.409449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.409460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.409803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.409814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.410119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.410130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.410437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.410447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.410755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.410765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.410993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.411004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 
00:31:22.252 [2024-06-10 11:38:19.411281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.411291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.411511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.411521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.411644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.411654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.411878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.411889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.412209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.412219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.412400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.412410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.412632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.412642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.412997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.413008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.413347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.413359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.413673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.413684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 
00:31:22.252 [2024-06-10 11:38:19.413890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.413900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.414155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.414165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.414451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.414462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.414772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.414783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.414925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.414937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.415244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.415254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.415458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.415468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.415783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.252 [2024-06-10 11:38:19.415794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.252 qpair failed and we were unable to recover it. 00:31:22.252 [2024-06-10 11:38:19.416046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.416057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.416400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.416412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 
00:31:22.253 [2024-06-10 11:38:19.416630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.416641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.417044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.417054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.417364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.417374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.417603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.417614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.417867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.417877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.418136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.418147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.418478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.418489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.418712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.418721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.419033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.419044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.419269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.419279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 
00:31:22.253 [2024-06-10 11:38:19.419662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.419672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.419853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.419862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.420158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.420169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.420346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.420356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.420542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.420552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.420773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.420783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.421010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.421020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.421335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.421345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.421652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.421663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.421983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.421994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 
00:31:22.253 [2024-06-10 11:38:19.422213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.422223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.422558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.422568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.422861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.422872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.423112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.423122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.423441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.423451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.423731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.423741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.423982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.253 [2024-06-10 11:38:19.423992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.253 qpair failed and we were unable to recover it. 00:31:22.253 [2024-06-10 11:38:19.424334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.424344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.424679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.424691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.425021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.425031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 
00:31:22.254 [2024-06-10 11:38:19.425347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.425358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.425681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.425691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.425983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.425995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.426364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.426374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.426594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.426603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.426957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.426968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.427298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.427308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.427628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.427639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.427915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.427925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.428126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.428136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 
00:31:22.254 [2024-06-10 11:38:19.428451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.428462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.428682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.428691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.429013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.429024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.429365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.429375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.429599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.429609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.429966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.429976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.430074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.430083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.430351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.430363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.430736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.430746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.431073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.431085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 
00:31:22.254 [2024-06-10 11:38:19.431404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.431414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.254 [2024-06-10 11:38:19.431637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.254 [2024-06-10 11:38:19.431647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.254 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.431944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.431956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.432269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.432280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.432608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.432619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.432869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.432879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.433272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.433281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.433593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.433604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.433836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.433847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.434195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.434205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 
00:31:22.531 [2024-06-10 11:38:19.434544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.434553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.434876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.434887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.435031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.435041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.435265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.435275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.435614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.435623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.531 [2024-06-10 11:38:19.435991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.531 [2024-06-10 11:38:19.436001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.531 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.436306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.436316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.436631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.436641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.436767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.436779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.437091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.437101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 
00:31:22.532 [2024-06-10 11:38:19.437419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.437430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.437737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.437747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.438014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.438024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.438338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.438348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.438531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.438541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.438665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.438674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.438857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.438868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.439079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.439089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.439407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.439418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.439732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.439743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 
00:31:22.532 [2024-06-10 11:38:19.439941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.439952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.440218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.440229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.440539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.440550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.440868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.440878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.441117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.441126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.441334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.441344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.441536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.441546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.441739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.441750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.442117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.442129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.442489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.442500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 
00:31:22.532 [2024-06-10 11:38:19.442837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.442849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.443035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.443045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.443318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.443327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.443668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.443678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.444041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.444051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.444364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.444375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.444690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.444700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.445059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.445069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.532 [2024-06-10 11:38:19.445396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.532 [2024-06-10 11:38:19.445407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.532 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.445721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.445731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 
00:31:22.533 [2024-06-10 11:38:19.446051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.446062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.446394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.446404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.446682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.446692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.446926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.446936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.447223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.447234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.447553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.447563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.447879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.447889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.448189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.448201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.448609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.448620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.448861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.448872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 
00:31:22.533 [2024-06-10 11:38:19.449263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.449273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.449558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.449569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.449892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.449903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.450207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.450218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.450531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.450541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.450900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.450910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.451103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.451113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.451434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.451444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.451830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.451841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.452176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.452186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 
00:31:22.533 [2024-06-10 11:38:19.452440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.452450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.452839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.452849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.453204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.453214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.453527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.453538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.453757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.453767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.454086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.454097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.454453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.454464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.454789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.454800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.455117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.455128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.455454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.455465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 
00:31:22.533 [2024-06-10 11:38:19.455782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.455793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.456076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.533 [2024-06-10 11:38:19.456088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.533 qpair failed and we were unable to recover it. 00:31:22.533 [2024-06-10 11:38:19.456404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.456415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.456718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.456729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.456920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.456933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.457268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.457279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.457499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.457510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.457812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.457827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.458139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.458149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.458483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.458494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 
00:31:22.534 [2024-06-10 11:38:19.458757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.458767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.459077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.459088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.459400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.459411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.459719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.459729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.460003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.460013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.460227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.460237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.460564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.460575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.460881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.460891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.461173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.461185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.461520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.461530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 
00:31:22.534 [2024-06-10 11:38:19.461849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.461861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.462209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.462219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.462535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.462545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.462893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.462904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.463232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.463242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.463545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.463556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.463871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.463882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.464072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.464082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.464295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.464305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.464612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.464622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 
00:31:22.534 [2024-06-10 11:38:19.464904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.464914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.465260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.465270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.465625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.465635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.465965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.465975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.466233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.534 [2024-06-10 11:38:19.466244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.534 qpair failed and we were unable to recover it. 00:31:22.534 [2024-06-10 11:38:19.466465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.466475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.466783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.466794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.467117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.467127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.467444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.467454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.467738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.467748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 
00:31:22.535 [2024-06-10 11:38:19.468028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.468038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.468252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.468261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.468591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.468601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.468900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.468912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.469246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.469256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.469564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.469575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.469884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.469894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.470213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.470224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.470559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.470569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.470657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.470666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 
00:31:22.535 [2024-06-10 11:38:19.470957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.470976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.471260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.471271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.471588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.471598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.471906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.471916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.472244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.472254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.472592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.472602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.472952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.472962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.473297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.473307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.473640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.473651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.473915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.473924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 
00:31:22.535 [2024-06-10 11:38:19.474252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.474261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.474573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.474583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.474904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.474914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.475220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.475231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.475558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.475568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.475751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.475761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.476134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.476144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.476434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.535 [2024-06-10 11:38:19.476445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.535 qpair failed and we were unable to recover it. 00:31:22.535 [2024-06-10 11:38:19.476768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.476778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.477091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.477101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 
00:31:22.536 [2024-06-10 11:38:19.477285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.477294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.477622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.477632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.477920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.477932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.478274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.478284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.478500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.478510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.478687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.478699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.479039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.479050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.479358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.479368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.479675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.479686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.479935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.479945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 
00:31:22.536 [2024-06-10 11:38:19.480171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.480180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.480406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.480416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.480727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.480737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.480994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.481005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.481309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.481319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.481616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.481627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.481851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.481861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.482166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.482177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.482513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.482523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.482832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.482843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 
00:31:22.536 [2024-06-10 11:38:19.483140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.483150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.483419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.483430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.483747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.483757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.484073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.484084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.536 [2024-06-10 11:38:19.484440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.536 [2024-06-10 11:38:19.484451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.536 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.484634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.484645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.484989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.485000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.485179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.485189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.485515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.485527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.485837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.485848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 
00:31:22.537 [2024-06-10 11:38:19.486176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.486186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.486504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.486515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.486842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.486854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.487177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.487187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.487553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.487563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.487876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.487888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.488233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.488243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.488555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.488565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.488788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.488797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.489110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.489121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 
00:31:22.537 [2024-06-10 11:38:19.489442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.489452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.489780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.489791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.490121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.490131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.490533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.490543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.490726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.490735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.491095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.491105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.491453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.491466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.491778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.491789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.492105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.492116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.492470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.492482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 
00:31:22.537 [2024-06-10 11:38:19.492789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.492799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.493096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.493107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.493352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.493363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.493736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.493748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.494115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.494126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.494256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.494268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.494469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.494481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.537 [2024-06-10 11:38:19.494777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.537 [2024-06-10 11:38:19.494789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.537 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.494895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.494907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.495226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.495237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 
00:31:22.538 [2024-06-10 11:38:19.495361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.495372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.495559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.495570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.495891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.495901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.496230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.496241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.496556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.496565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.496878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.496891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.497218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.497228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.497540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.497550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.497857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.497869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.498226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.498236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 
00:31:22.538 [2024-06-10 11:38:19.498550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.498561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.498873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.498883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.499208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.499218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.499555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.499565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.499786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.499795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.500083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.500094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.500408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.500418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.500645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.500655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.501043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.501054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.501370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.501381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 
00:31:22.538 [2024-06-10 11:38:19.501592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.501603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.501913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.501924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.502287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.502297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.502600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.502611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.502942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.502953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.503284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.503295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.503519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.503530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.503843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.503854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.504252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.504262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.538 [2024-06-10 11:38:19.504445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.504456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 
00:31:22.538 [2024-06-10 11:38:19.504774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.538 [2024-06-10 11:38:19.504784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.538 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.504964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.504975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.505250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.505260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.505593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.505602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.505812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.505826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.506118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.506129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.506441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.506451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.506789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.506799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.507034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.507044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.507358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.507369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 
00:31:22.539 [2024-06-10 11:38:19.507668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.507679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1734337 Killed "${NVMF_APP[@]}" "$@" 00:31:22.539 [2024-06-10 11:38:19.507979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.507991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.508320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.508330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.508463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.508473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:22.539 [2024-06-10 11:38:19.508796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.508806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:22.539 [2024-06-10 11:38:19.509146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.509159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.539 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:22.539 [2024-06-10 11:38:19.509473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.509487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 
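At this point the refusals are expected: line 36 of target_disconnect.sh has just killed the previous nvmf_tgt (pid 1734337), and test case tc2 responds by calling disconnect_init 10.0.0.2, which brings up a replacement target through nvmfappstart -m 0xF0. The -m argument is the SPDK core mask; 0xF0 is binary 1111 0000, i.e. reactors pinned to CPU cores 4-7. A quick way to decode such a mask, assuming python3 is available on the build node:

  # list the CPU cores selected by core mask 0xF0
  python3 -c 'print([c for c in range(8) if 0xF0 >> c & 1])'   # -> [4, 5, 6, 7]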
00:31:22.539 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:22.539 [2024-06-10 11:38:19.509797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.509809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.510147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.510158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.510487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.510498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.510713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.510724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.511119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.511129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.511303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.511313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.511606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.511617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.511798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.511809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.512130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.512141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 
00:31:22.539 [2024-06-10 11:38:19.512403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.512414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.512749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.512759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.513069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.513079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.513339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.513353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.513651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.539 [2024-06-10 11:38:19.513662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.539 qpair failed and we were unable to recover it. 00:31:22.539 [2024-06-10 11:38:19.513873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.513884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.514254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.514265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.514578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.514588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.514899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.514909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.515140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.515150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 
00:31:22.540 [2024-06-10 11:38:19.515335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.515347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.515675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.515687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.515888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.515899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.516249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.516260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.516473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.516484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.516704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.516715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1735146 00:31:22.540 [2024-06-10 11:38:19.517090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1735146 00:31:22.540 [2024-06-10 11:38:19.517105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.517406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.517418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 
00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1735146 ']' 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:22.540 [2024-06-10 11:38:19.517690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.517703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:22.540 [2024-06-10 11:38:19.518038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.518050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.540 [2024-06-10 11:38:19.518385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.518397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:22.540 [2024-06-10 11:38:19.518557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.518568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 11:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:22.540 [2024-06-10 11:38:19.518824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.518838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.519037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.519049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 
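The replacement target (nvmfpid=1735146) is started inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until the new process answers on its RPC socket, which is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." while the connection errors keep scrolling. A rough equivalent of that wait, sketched here rather than taken from the autotest helpers, assuming the default /var/tmp/spdk.sock RPC path and the max_retries=100 set above:

  # poll for the nvmf_tgt RPC UNIX socket, giving up after 100 tries like waitforlisten
  for i in $(seq 1 100); do [ -S /var/tmp/spdk.sock ] && break; sleep 0.1; done
  [ -S /var/tmp/spdk.sock ] && echo "nvmf_tgt is accepting RPCs" || echo "timed out waiting for RPC socket"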
00:31:22.540 [2024-06-10 11:38:19.519362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.519373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.519714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.519726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.519986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.519997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.520229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.520240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.520461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.520472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.520586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.520596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.520827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.520839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.521157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.521168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.521477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.521490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 00:31:22.540 [2024-06-10 11:38:19.521813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.540 [2024-06-10 11:38:19.521829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.540 qpair failed and we were unable to recover it. 
00:31:22.541 [2024-06-10 11:38:19.522162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.522174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.522520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.522531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.522862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.522874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.523224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.523236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.523533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.523544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.523843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.523855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.524186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.524196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.524485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.524497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.524830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.524843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.525120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.525131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 
00:31:22.541 [2024-06-10 11:38:19.525449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.525461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.525707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.525717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.526022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.526033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.526349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.526360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.526596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.526606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.526909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.526921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.527130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.527141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.527523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.527534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.527712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.527725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.527947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.527959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 
00:31:22.541 [2024-06-10 11:38:19.528305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.528316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.528640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.528652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.528868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.528879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.529212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.529224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.529532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.529543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.529863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.529874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.530065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.530075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.530386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.530397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.530570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.530582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.530919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.530929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 
00:31:22.541 [2024-06-10 11:38:19.531111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.531121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.531505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.531515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.541 [2024-06-10 11:38:19.531829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.541 [2024-06-10 11:38:19.531840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.541 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.532200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.532210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.532393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.532402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.532701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.532711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.533077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.533088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.533404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.533415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.533736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.533745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.534075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.534086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 
00:31:22.542 [2024-06-10 11:38:19.534293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.534304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.534484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.534494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.534725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.534735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.535041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.535052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.535366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.535376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.535657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.535667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.535987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.535999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.536372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.536383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.536647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.536657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.537023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.537034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 
00:31:22.542 [2024-06-10 11:38:19.537370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.537380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.537749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.537760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.538093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.538103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.538320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.538330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.538552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.538563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.538883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.538894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.539237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.539247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.539461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.539472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.539704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.539717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 00:31:22.542 [2024-06-10 11:38:19.539965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.539976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.542 qpair failed and we were unable to recover it. 
00:31:22.542 [2024-06-10 11:38:19.540207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-06-10 11:38:19.540216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.540442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.540451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.540642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.540652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.540716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.540725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.541107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.541117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.541459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.541469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.541693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.541703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.542013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.542024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.542360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.542370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.542674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.542684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 
00:31:22.543 [2024-06-10 11:38:19.543003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.543013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.543341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.543351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.543658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.543667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.543866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.543879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.544196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.544206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.544527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.544537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.544764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.544774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.544988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.544998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.545322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.545332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.545659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.545669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 
00:31:22.543 [2024-06-10 11:38:19.546002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.546013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.546230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.546240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.546565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.546575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.546871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.546883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.547085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.547095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.547416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.547427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.547638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.547648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.547996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.548006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.548348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.548358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.548487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.548497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 
00:31:22.543 [2024-06-10 11:38:19.548747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.548757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.548940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.548953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.549263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.549274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.543 [2024-06-10 11:38:19.549583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-06-10 11:38:19.549593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.543 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.549771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.549783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.550104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.550116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.550354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.550365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.550726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.550736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.550951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.550965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.551166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.551177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 
00:31:22.544 [2024-06-10 11:38:19.551499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.551509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.551835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.551847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.552153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.552163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.552491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.552502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.552720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.552730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.553051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.553062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.553255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.553265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.553490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.553500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.553814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.553828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.554168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.554179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 
00:31:22.544 [2024-06-10 11:38:19.554459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.554469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.554657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.554667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.554984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.554995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.555319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.555329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.555543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.555553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.555838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.555848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.556143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.556154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.556351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.556361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.556634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.556644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.556931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.556941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 
00:31:22.544 [2024-06-10 11:38:19.557257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.557267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.557613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.557624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.557936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.557946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.558136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.558145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.558320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.558330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.558589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.558599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.544 qpair failed and we were unable to recover it. 00:31:22.544 [2024-06-10 11:38:19.558961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-06-10 11:38:19.558972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.559307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.559317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.559616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.559627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.559913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.559923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 
00:31:22.545 [2024-06-10 11:38:19.560262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.560273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.560588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.560597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.560719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.560729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.561097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.561107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.561465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.561475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.561609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.561619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.561875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.561885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.562243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.562253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.562562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.562575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.562781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.562791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 
00:31:22.545 [2024-06-10 11:38:19.563157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.563168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.563253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.563262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.563557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.563567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.563767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.563777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.564148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.564159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.564578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.564589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.564906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.564917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.565252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.565263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.565592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.565602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.565964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.565975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 
00:31:22.545 [2024-06-10 11:38:19.566230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.566240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.566441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.566452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.566775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.566786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.567118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.567129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.567488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.567500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.567831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.567819] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:31:22.545 [2024-06-10 11:38:19.567843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.567872] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.545 [2024-06-10 11:38:19.568091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.568102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.568430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.568439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 00:31:22.545 [2024-06-10 11:38:19.568576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.545 [2024-06-10 11:38:19.568585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.545 qpair failed and we were unable to recover it. 
00:31:22.546 [2024-06-10 11:38:19.568780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.568790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.569091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.569102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.569444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.569455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.569776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.569787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.569923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.569934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.570283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.570294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.570511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.570522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.570841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.570853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.571167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.571179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.571484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.571495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 
00:31:22.546 [2024-06-10 11:38:19.571815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.571829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.571975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.571986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.572192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.572204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.572534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.572545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.572942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.572954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.573197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.573208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.573527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.573538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.573764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.573774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.574072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.574085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.574298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.574309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 
00:31:22.546 [2024-06-10 11:38:19.574622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.574633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.574949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.574961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.575167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.575178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.575358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.575369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.575702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.575714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.576050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.576061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.576373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.576384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.576705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.576716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.577005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.577016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.577228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.577240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 
00:31:22.546 [2024-06-10 11:38:19.577566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.577577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.577907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.577919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.546 qpair failed and we were unable to recover it. 00:31:22.546 [2024-06-10 11:38:19.578228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.546 [2024-06-10 11:38:19.578238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.578575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.578586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.578869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.578880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.579140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.579151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.579453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.579463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.579789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.579800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.580124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.580135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.580464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.580475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 
00:31:22.547 [2024-06-10 11:38:19.580785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.580796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.581056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.581068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.581393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.581404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.581707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.581718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.582035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.582047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.582346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.582357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.582682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.582692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.582921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.582931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.583241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.583250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.583558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.583568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 
00:31:22.547 [2024-06-10 11:38:19.583760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.583770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.584062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.584072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.584402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.584412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.584786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.584797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.585037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.585048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.585366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.585376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.585743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.585753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.586002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.586012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.586299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.586311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 00:31:22.547 [2024-06-10 11:38:19.586619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.586630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.547 qpair failed and we were unable to recover it. 
00:31:22.547 [2024-06-10 11:38:19.586847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.547 [2024-06-10 11:38:19.586857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.587171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.587182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.587496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.587506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.587830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.587841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.587981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.587991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.588223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.588232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.588547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.588557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.588885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.588895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.589223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.589234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.589570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.589580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 
00:31:22.548 [2024-06-10 11:38:19.589751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.589760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.590148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.590159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.590462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.590473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.590812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.590827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.591021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.591030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.591344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.591355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.591631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.591642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.592001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.592011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.592315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.592326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.592633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.592643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 
00:31:22.548 [2024-06-10 11:38:19.592989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.592999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.593298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.593308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.593615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.593626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.593943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.593953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.594288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.594298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.594661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.594672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.595005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.595016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.595197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.595207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.595399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.595410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.595747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.595757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 
00:31:22.548 [2024-06-10 11:38:19.596035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.596046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.596270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.596280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.596606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.596617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.548 qpair failed and we were unable to recover it. 00:31:22.548 [2024-06-10 11:38:19.596936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.548 [2024-06-10 11:38:19.596946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.597126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.597136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.597338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.597349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.597692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.597701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.597890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.597900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.598185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.598197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.598522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.598532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 
00:31:22.549 [2024-06-10 11:38:19.598849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.598859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.599102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.599112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.599338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.599348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.599671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.599682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.599902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.599912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.600260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.600269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.600483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.600493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.600811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.600831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.601167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.601178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.601506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.601516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 
00:31:22.549 [2024-06-10 11:38:19.601843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.601854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.549 [2024-06-10 11:38:19.602154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.602168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.602384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.602394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.602704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.602716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.603014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.603025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.603352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.603362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.603672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.603681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.603950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.603960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.604234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.604244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.604397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.604406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 
00:31:22.549 [2024-06-10 11:38:19.604738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.604750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.605066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.605076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.605467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.605477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.605681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.605690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.606015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.606026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.549 [2024-06-10 11:38:19.606259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.549 [2024-06-10 11:38:19.606268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.549 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.606575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.606585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.606894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.606905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.607279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.607289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.607519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.607529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 
00:31:22.550 [2024-06-10 11:38:19.607731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.607742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.608074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.608084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.608387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.608398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.608731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.608741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.608956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.608967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.609305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.609315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.609628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.609639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.609941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.609951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.610153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.610163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.610439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.610449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 
00:31:22.550 [2024-06-10 11:38:19.610758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.610768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.611086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.611096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.611304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.611313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.611622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.611632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.611867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.611877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.612188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.612197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.612509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.612519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.612837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.612848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.613039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.613049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.613370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.613380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 
00:31:22.550 [2024-06-10 11:38:19.613694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.613706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.613982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.613994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.614309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.614319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.614657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.614668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.614969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.614980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.615293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.615303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.615488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.615498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.615697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.615707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.550 [2024-06-10 11:38:19.615947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.550 [2024-06-10 11:38:19.615957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.550 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.616278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.616288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 
00:31:22.551 [2024-06-10 11:38:19.616465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.616474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.616618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.616628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.616901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.616912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.617211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.617221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.617560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.617569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.617791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.617801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.618116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.618126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.618343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.618352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.618648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.618658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.618933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.618943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 
00:31:22.551 [2024-06-10 11:38:19.619270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.619280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.619509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.619519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.619793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.619804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.620116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.620126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.620464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.620474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.620686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.620696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.620888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.620898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.621124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.621134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.621335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.621346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.621654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.621664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 
00:31:22.551 [2024-06-10 11:38:19.621979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.621990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.622329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.622338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.622562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.622571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.622878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.622889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.623219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.623229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.623555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.623566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.623837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.623847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.624204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.624214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.624538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.624548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.624842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.624852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 
00:31:22.551 [2024-06-10 11:38:19.625072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.625082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.551 [2024-06-10 11:38:19.625390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.551 [2024-06-10 11:38:19.625403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.551 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.625713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.625723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.626038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.626049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.626379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.626389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.626723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.626733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.627007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.627017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.627356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.627366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.627561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.627572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.627772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.627783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 
00:31:22.552 [2024-06-10 11:38:19.628105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.628116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.628395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.628405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.628727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.628737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.629056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.629066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.629375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.629385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.629780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.629791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.629999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.630011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.630353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.630364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.630674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.630684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.631000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.631010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 
00:31:22.552 [2024-06-10 11:38:19.631255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.631266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.631574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.631585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.631818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.631832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.632193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.632203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.632431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.632441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.632751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.632761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.633117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.633128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.633462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.633472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.633726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.633735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 00:31:22.552 [2024-06-10 11:38:19.633933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.552 [2024-06-10 11:38:19.633945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.552 qpair failed and we were unable to recover it. 
00:31:22.553 [2024-06-10 11:38:19.634263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.634274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.634457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.634467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.634734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.634745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.635071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.635083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.635370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.635381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.635694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.635704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.636007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.636026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.636347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.636356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.636703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.636713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.637026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.637037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 
00:31:22.553 [2024-06-10 11:38:19.637343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.637353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.637565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.637574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.637868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.637878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.638176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.638186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.638507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.638517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.638838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.638849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.639162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.639172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.639482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.639493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.639826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.639837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.640130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.640141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 
00:31:22.553 [2024-06-10 11:38:19.640453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.640463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.640761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.640772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.640979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.640990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.641324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.641335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.641657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.641668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.641914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.641925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.642256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.642266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.642574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.642586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.642904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.642914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.643257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.643268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 
00:31:22.553 [2024-06-10 11:38:19.643587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.643598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.643938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.643949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.553 [2024-06-10 11:38:19.644276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.553 [2024-06-10 11:38:19.644286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.553 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.644617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.644626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.644942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.644952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.645284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.645294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.645597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.645609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.645939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.645949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.646273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.646285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.646588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.646597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 
00:31:22.554 [2024-06-10 11:38:19.646897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.646908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.647220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.647231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.647545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.647555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.647874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.647886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.648259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.648269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.648512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.648522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.648852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.648863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.649193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.649204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.649509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.649520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.649861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.649872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 
00:31:22.554 [2024-06-10 11:38:19.650033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.650043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.650250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.650260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.650591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.650601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.650926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.650936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.651257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.651267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.651605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.651615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.651956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.651968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.652266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.652277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.652600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.652611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.652900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.652911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 
00:31:22.554 [2024-06-10 11:38:19.653199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.653209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.653555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.653565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.653790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.653800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.654096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.654107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.654455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.654466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.554 [2024-06-10 11:38:19.654782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.554 [2024-06-10 11:38:19.654793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.554 qpair failed and we were unable to recover it. 00:31:22.555 [2024-06-10 11:38:19.654928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.555 [2024-06-10 11:38:19.654940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.555 qpair failed and we were unable to recover it. 00:31:22.555 [2024-06-10 11:38:19.655347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.555 [2024-06-10 11:38:19.655358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.555 qpair failed and we were unable to recover it. 00:31:22.555 [2024-06-10 11:38:19.655573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.555 [2024-06-10 11:38:19.655582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.555 qpair failed and we were unable to recover it. 00:31:22.555 [2024-06-10 11:38:19.655889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.555 [2024-06-10 11:38:19.655900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.555 qpair failed and we were unable to recover it. 
00:31:22.555 [2024-06-10 11:38:19.656216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.656227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.656550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.656560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.656865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.656876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.657176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.657186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.657492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.657503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.657635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:22.555 [2024-06-10 11:38:19.657720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.657730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.658048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.658060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.658343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.658353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.555 [2024-06-10 11:38:19.658666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.555 [2024-06-10 11:38:19.658677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.555 qpair failed and we were unable to recover it.
00:31:22.560 [2024-06-10 11:38:19.719399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.719409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.719761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.719772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.720162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.720172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.720295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.561 [2024-06-10 11:38:19.720322] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.561 [2024-06-10 11:38:19.720329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.561 [2024-06-10 11:38:19.720335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.561 [2024-06-10 11:38:19.720340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.561 [2024-06-10 11:38:19.720498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.720509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.720414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:31:22.561 [2024-06-10 11:38:19.720557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:31:22.561 [2024-06-10 11:38:19.720684] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:31:22.561 [2024-06-10 11:38:19.720685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:31:22.561 [2024-06-10 11:38:19.720827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.720837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.721139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.721149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.721453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.721463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 
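The app_setup_trace NOTICE lines in the block above spell out how a tracepoint snapshot of this run could be pulled while the application is still up. A short sketch of doing that from the build host follows; the tool invocation and the shared-memory file name are taken verbatim from those notices, while the spdk_trace binary being on PATH and the copy destination are assumptions for illustration only:

  # Capture a snapshot of the nvmf tracepoints at runtime, as the NOTICE suggests.
  spdk_trace -s nvmf -i 0
  # Or keep the raw trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is illustrative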
00:31:22.561 [2024-06-10 11:38:19.721797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.721807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.722179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.722189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.722509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.722520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.722605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.722616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.722911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.722922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.723137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.723147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.723331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.723342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.723680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.723690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.724028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.724039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.724388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.724399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 
00:31:22.561 [2024-06-10 11:38:19.724715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.724725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.724946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.724956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.725252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.725262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.725599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.725610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.725929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.725939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.726256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.726266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.726593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.726602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.726827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.726837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.727169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.727179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.727408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.727417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 
00:31:22.561 [2024-06-10 11:38:19.727740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.727750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.728092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.728103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.561 [2024-06-10 11:38:19.728418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.561 [2024-06-10 11:38:19.728431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.561 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.728746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.728757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.729109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.729120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.729459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.729470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.729700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.729711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.730009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.730020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.730236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.730246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.730570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.730580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 
00:31:22.562 [2024-06-10 11:38:19.730767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.730777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.731130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.731142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.731441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.731453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.731682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.731693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.731922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.731933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.732309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.732319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.732572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.732583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.732766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.732776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.732861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.732870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.733151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.733162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 
00:31:22.562 [2024-06-10 11:38:19.733471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.733481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.733829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.733841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.734149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.734160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.734472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.734482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.734799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.734811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.735212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.735225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.735442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.735453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.735662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.735673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.736012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.736024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.736217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.736228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 
00:31:22.562 [2024-06-10 11:38:19.736623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.736634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.736803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.736814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.737182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.737193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.737382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.737391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.737516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.737526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.737852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.562 [2024-06-10 11:38:19.737864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.562 qpair failed and we were unable to recover it. 00:31:22.562 [2024-06-10 11:38:19.738171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.738182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.738519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.738531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.738755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.738765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.739124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.739134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 
00:31:22.563 [2024-06-10 11:38:19.739460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.739470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.739674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.739684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.739872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.739885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.740085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.740095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.740405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.740416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-06-10 11:38:19.740602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.563 [2024-06-10 11:38:19.740612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.832 [2024-06-10 11:38:19.740790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.832 [2024-06-10 11:38:19.740801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.832 qpair failed and we were unable to recover it. 00:31:22.832 [2024-06-10 11:38:19.741123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.741134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.741323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.741332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.741671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.741681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 
00:31:22.833 [2024-06-10 11:38:19.741989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.741999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.742072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.742081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.742438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.742448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.742770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.742782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.742970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.742980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.743191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.743201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.743418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.743429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.743755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.743764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.743971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.743983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.744193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.744203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 
00:31:22.833 [2024-06-10 11:38:19.744565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.744575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.744883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.744894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.745092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.745101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.745274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.745283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.745525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.745535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.745679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.745689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.745977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.745987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.746326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.746337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.746655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.746666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.746740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.746750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 
00:31:22.833 [2024-06-10 11:38:19.746950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.746961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.747151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.747162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.747500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.747510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.747865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.747876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.748126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.748138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.748481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.748492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.748834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.833 [2024-06-10 11:38:19.748845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.833 qpair failed and we were unable to recover it. 00:31:22.833 [2024-06-10 11:38:19.749225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.749236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.749557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.749567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.749905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.749915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 
00:31:22.834 [2024-06-10 11:38:19.750264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.750274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.750461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.750470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.750793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.750806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.751139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.751151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.751372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.751382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.751589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.751598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.751923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.751934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.752241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.752252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.752587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.752597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.752789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.752799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 
00:31:22.834 [2024-06-10 11:38:19.753036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.753046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.753388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.753398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.753687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.753697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.753888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.753898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.754229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.754239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.754584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.754594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.754935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.754945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.755232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.755242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.755549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.755559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.755897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.755908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 
00:31:22.834 [2024-06-10 11:38:19.756243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.756254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.756465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.756477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.756849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.756861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.757185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.757195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.757550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.757560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.757885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.757896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.758266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.758275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.834 [2024-06-10 11:38:19.758591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.834 [2024-06-10 11:38:19.758603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.834 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.758799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.758809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.759155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.759165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 
00:31:22.835 [2024-06-10 11:38:19.759358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.759367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.759699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.759708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.760042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.760054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.760234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.760243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.760408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.760417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.760748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.760758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.760946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.760956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.761288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.761298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.761613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.761623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.761961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.761971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 
00:31:22.835 [2024-06-10 11:38:19.762277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.762287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.762612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.762623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.763019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.763032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.763342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.763352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.763652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.763661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.763996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.764006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.764190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.764199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.764362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.764371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.764713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.764724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.765060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.765070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 
00:31:22.835 [2024-06-10 11:38:19.765389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.765400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.765770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.765780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.766094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.766105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.766424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.766435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.835 [2024-06-10 11:38:19.766624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.835 [2024-06-10 11:38:19.766635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.835 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.766933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.766943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.767335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.767346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.767656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.767667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.767897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.767908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.768116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.768126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 
00:31:22.836 [2024-06-10 11:38:19.768446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.768456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.768777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.768787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.769115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.769125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.769454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.769465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.769803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.769813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.770048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.770059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.770231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.770241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.770424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.770434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.770765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.770775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.770934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.770945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 
00:31:22.836 [2024-06-10 11:38:19.771288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.771297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.771636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.771646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.771695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.771703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.771985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.771996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.772305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.772315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.772638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.772649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.772970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.772981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.773197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.773206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.773527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.773537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.773803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.773813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 
00:31:22.836 [2024-06-10 11:38:19.774153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.774162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.774347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.774357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.774547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.774559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.774742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.774753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.775084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.775094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.775283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.775293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.775599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.775610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.836 qpair failed and we were unable to recover it. 00:31:22.836 [2024-06-10 11:38:19.775948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.836 [2024-06-10 11:38:19.775958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.776248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.776259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.776600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.776611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 
00:31:22.837 [2024-06-10 11:38:19.776923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.776933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.777099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.777108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.777442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.777453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.777640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.777650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.777986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.777997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.778359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.778370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.778720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.778731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.778915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.778926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.779255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.779264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.779580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.779590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 
00:31:22.837 [2024-06-10 11:38:19.779920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.779931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.780244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.780254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.780438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.780449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.780777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.780788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.781161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.781173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.781427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.781438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.781776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.781787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.782119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.782130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.782324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.782335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.782550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.782561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 
00:31:22.837 [2024-06-10 11:38:19.782918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.782928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.782999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.783008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.783326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.783336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.783653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.783664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.783855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.783866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.784149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.784158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.784346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.784365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.784684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.784693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.785020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.837 [2024-06-10 11:38:19.785031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.837 qpair failed and we were unable to recover it. 00:31:22.837 [2024-06-10 11:38:19.785350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.785360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 
00:31:22.838 [2024-06-10 11:38:19.785677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.785688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.785873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.785884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.786077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.786089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.786407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.786419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.786765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.786775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.786965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.786975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.787301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.787311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.787628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.787638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.787962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.787973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.788156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.788166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 
00:31:22.838 [2024-06-10 11:38:19.788353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.788364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.788653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.788663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.788981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.788992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.789170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.789179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.789495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.789506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.789695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.789707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.789899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.789909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.790263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.790274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.790458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.790469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.790792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.790802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 
00:31:22.838 [2024-06-10 11:38:19.791135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.791146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.791494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.791505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.791849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.791859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.792076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.792086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.792277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.792286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.838 [2024-06-10 11:38:19.792611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.838 [2024-06-10 11:38:19.792621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.838 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.792963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.792973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.793289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.793299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.793616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.793627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.793799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.793809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 
00:31:22.839 [2024-06-10 11:38:19.794203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.794213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.794495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.794506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.794724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.794734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.795057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.795068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.795244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.795255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.795586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.795597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.795916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.795927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.796126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.796135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.796432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.796443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.796792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.796802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 
00:31:22.839 [2024-06-10 11:38:19.796954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.796965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.797316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.797327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.797676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.797688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.797905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.797915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.798204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.798215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.798402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.798412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.798754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.798764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.799082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.799094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.799412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.799422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.799760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.799771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 
00:31:22.839 [2024-06-10 11:38:19.800042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.800053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.800239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.800249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.800426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.800437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.800751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.800762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.801077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.801088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.801401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.801412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.801781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.801791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.802109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.839 [2024-06-10 11:38:19.802121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.839 qpair failed and we were unable to recover it. 00:31:22.839 [2024-06-10 11:38:19.802464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.802474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.802719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.802729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 
00:31:22.840 [2024-06-10 11:38:19.803043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.803053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.803239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.803248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.803561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.803571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.803904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.803915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.804229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.804240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.804435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.804446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.804788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.804799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.804971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.804982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.805331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.805342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 00:31:22.840 [2024-06-10 11:38:19.805515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.840 [2024-06-10 11:38:19.805526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.840 qpair failed and we were unable to recover it. 
00:31:22.840 [2024-06-10 11:38:19.805738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.840 [2024-06-10 11:38:19.805749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.840 qpair failed and we were unable to recover it.
00:31:22.840 [... the same three-line error sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously for every reconnect attempt from 11:38:19.805738 through 11:38:19.865932 ...]
00:31:22.847 [2024-06-10 11:38:19.865922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.847 [2024-06-10 11:38:19.865932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.847 qpair failed and we were unable to recover it.
00:31:22.847 [2024-06-10 11:38:19.866108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.866117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.866408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.866417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.866734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.866744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.866930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.866941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.867221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.867230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.867600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.867610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.867798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.867807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.868138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.868149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.868363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.868373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.868696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.868706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 
00:31:22.847 [2024-06-10 11:38:19.869038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.869048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.869313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.869322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.869501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.869512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.869851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.869862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.870044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.870054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.870385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.870396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.870733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.870744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.870937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.870947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.871254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.871265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.871663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.871674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 
00:31:22.847 [2024-06-10 11:38:19.871988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.871998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.872308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.872318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.872646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.872657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.872964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.872975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.873311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.873321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.873662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.873672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.874017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.874028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.874208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.874218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.874528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.874539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 00:31:22.847 [2024-06-10 11:38:19.874876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.847 [2024-06-10 11:38:19.874888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.847 qpair failed and we were unable to recover it. 
00:31:22.848 [2024-06-10 11:38:19.875199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.875210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.875396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.875407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.875705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.875714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.876051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.876061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.876245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.876254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.876526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.876536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.876875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.876885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.877246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.877255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.877445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.877454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.877737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.877748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 
00:31:22.848 [2024-06-10 11:38:19.878058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.878068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.878249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.878259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.878596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.878607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.878829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.878841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.879014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.879024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.879361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.879371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.879687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.879697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.879884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.879894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.880064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.880074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.880409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.880419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 
00:31:22.848 [2024-06-10 11:38:19.880741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.880751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.881088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.881098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.881398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.881409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.881746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.881756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.882075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.882085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.882404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.882416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.882756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.848 [2024-06-10 11:38:19.882765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.848 qpair failed and we were unable to recover it. 00:31:22.848 [2024-06-10 11:38:19.883073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.883085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.883132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.883142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.883443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.883453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 
00:31:22.849 [2024-06-10 11:38:19.883776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.883786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.884063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.884074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.884303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.884313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.884680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.884690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.884874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.884884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.885202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.885212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.885548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.885557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.885738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.885748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.886030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.886040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.886259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.886271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 
00:31:22.849 [2024-06-10 11:38:19.886610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.886621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.886941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.886951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.887134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.887143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.887200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.887210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.887416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.887425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.887728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.887738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.888051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.888063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.888378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.888388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.888722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.888732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.889072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.889083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 
00:31:22.849 [2024-06-10 11:38:19.889399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.889409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.889599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.889609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.889764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.889773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.889951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.849 [2024-06-10 11:38:19.889961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.849 qpair failed and we were unable to recover it. 00:31:22.849 [2024-06-10 11:38:19.890281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.890292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.890577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.890587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.890903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.890914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.891099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.891109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.891479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.891489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.891666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.891677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 
00:31:22.850 [2024-06-10 11:38:19.891879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.891890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.892234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.892245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.892562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.892572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.892879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.892890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.893230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.893240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.893587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.893597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.893934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.893945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.894271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.894282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.894615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.894625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.894891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.894901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 
00:31:22.850 [2024-06-10 11:38:19.895087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.895097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.895428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.895439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.895779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.895789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.896129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.896140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.896463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.896475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.896791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.896800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.897022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.897032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.897227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.897237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.897391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.897401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.897585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.897597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 
00:31:22.850 [2024-06-10 11:38:19.897904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.897914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.850 [2024-06-10 11:38:19.898281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.850 [2024-06-10 11:38:19.898292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.850 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.898577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.898587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.898901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.898912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.899250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.899261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.899461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.899471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.899781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.899792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.900098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.900108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.900292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.900301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.900653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.900662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 
00:31:22.851 [2024-06-10 11:38:19.901005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.901015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.901346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.901357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.901546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.901558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.901740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.901749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.902069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.902080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.902259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.902269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.902454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.902465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.902773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.902783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.903100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.903111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 00:31:22.851 [2024-06-10 11:38:19.903426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.851 [2024-06-10 11:38:19.903436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.851 qpair failed and we were unable to recover it. 
00:31:22.851 [2024-06-10 11:38:19.903771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.851 [2024-06-10 11:38:19.903781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:22.851 qpair failed and we were unable to recover it.
[... the same three-line record repeats for every reconnect attempt from 11:38:19.904 through 11:38:19.964: each connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the socket error for tqpair=0x7f0600000b90, and the qpair cannot be recovered ...]
00:31:22.858 [2024-06-10 11:38:19.964993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.965005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.965191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.965201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.965506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.965517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.965707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.965717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.966050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.966061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.966384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.966394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.966584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.966593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.966940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.966950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.967141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.967151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 00:31:22.858 [2024-06-10 11:38:19.967481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.858 [2024-06-10 11:38:19.967493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.858 qpair failed and we were unable to recover it. 
00:31:22.859 [2024-06-10 11:38:19.967682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.967691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.968017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.968028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.968363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.968374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.968661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.968672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.968992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.969003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.969358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.969368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.969706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.969717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.969772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.969781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.970064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.970075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.970413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.970423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 
00:31:22.859 [2024-06-10 11:38:19.970755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.970766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.971084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.971103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.971427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.971437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.971569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.971579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.971895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.971906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.972089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.972101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.972296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.972308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.972610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.972621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.972808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.972819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.973108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.973119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 
00:31:22.859 [2024-06-10 11:38:19.973434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.973444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.973660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.973669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.974007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.974018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.974325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.974336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.974519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.974529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.974845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.974857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.975184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.975194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.975379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.975388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.975707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.975717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.976040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.976052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 
00:31:22.859 [2024-06-10 11:38:19.976237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.976248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.976420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.976429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.859 [2024-06-10 11:38:19.976614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.859 [2024-06-10 11:38:19.976623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.859 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.976942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.976952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.977329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.977339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.977648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.977659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.977918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.977928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.978088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.978098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.978271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.978280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.978480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.978491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 
00:31:22.860 [2024-06-10 11:38:19.978819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.978834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.979173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.979184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.979525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.979535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.979714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.979724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.980005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.980015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.980352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.980363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.980699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.980710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.980995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.981005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.981321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.981332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.981699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.981709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 
00:31:22.860 [2024-06-10 11:38:19.981928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.981937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.982247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.982257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.982444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.982454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.982761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.982772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.982972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.982982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.983311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.983321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.983523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.983532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.983730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.983741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.984043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.984054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.984368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.984379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 
00:31:22.860 [2024-06-10 11:38:19.984704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.860 [2024-06-10 11:38:19.984714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.860 qpair failed and we were unable to recover it. 00:31:22.860 [2024-06-10 11:38:19.985034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.985045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.985348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.985358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.985541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.985550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.985827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.985838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.986144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.986154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.986457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.986468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.986832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.986842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.987182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.987192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.987380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.987390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 
00:31:22.861 [2024-06-10 11:38:19.987724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.987735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.988070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.988081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.988234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.988244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.988577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.988587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.988764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.988774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.989101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.989112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.989303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.989312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.989585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.989595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.989760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.989769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.990093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.990105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 
00:31:22.861 [2024-06-10 11:38:19.990453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.990464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.990780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.990791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.991127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.991139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.991449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.991460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.991769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.991780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.991962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.991973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.992294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.992305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.992616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.992626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.992949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.992959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.993317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.993327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 
00:31:22.861 [2024-06-10 11:38:19.993514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.993524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.993855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.993866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.994040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.994050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.994350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.861 [2024-06-10 11:38:19.994360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.861 qpair failed and we were unable to recover it. 00:31:22.861 [2024-06-10 11:38:19.994724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.994735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.994935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.994946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.995302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.995313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.995650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.995661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.995707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.995716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.996077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.996089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 
00:31:22.862 [2024-06-10 11:38:19.996355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.996365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.996552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.996562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.996755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.996765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.997078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.997088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.997390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.997402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.997717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.997728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.998077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.998089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.998428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.998439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.998625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.998636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.999056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.999066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 
00:31:22.862 [2024-06-10 11:38:19.999408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.999418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:19.999759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:19.999769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.000088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.000099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.000419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.000430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.000666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.000677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.000966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.000977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.001175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.001185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.001264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.001274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.001355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.001365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.001907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.001923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 
00:31:22.862 [2024-06-10 11:38:20.002165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.002175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.002309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.002318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.002557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.002567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.002629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.002640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.002933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.002943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.003307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.862 [2024-06-10 11:38:20.003318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.862 qpair failed and we were unable to recover it. 00:31:22.862 [2024-06-10 11:38:20.003512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.003523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.003867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.003879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.004237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.004247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.004548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.004559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 
00:31:22.863 [2024-06-10 11:38:20.004753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.004762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.004861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.004870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.005080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.005090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.005315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.005326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.005651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.005661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.005984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.005995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.006262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.006272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.006354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.006363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.006678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.006689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.007003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 
00:31:22.863 [2024-06-10 11:38:20.007104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.007167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.007479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.007708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.007898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.007908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.008236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.008246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.008436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.008464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.008521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.008529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.008773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.008783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.009109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.009119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 
00:31:22.863 [2024-06-10 11:38:20.009447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.009455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.009657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.009665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.009926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.009935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.010182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.010189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.010350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.010358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.010705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.010713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.010879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.010887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.010966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.010973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.011093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.863 [2024-06-10 11:38:20.011100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.863 qpair failed and we were unable to recover it. 00:31:22.863 [2024-06-10 11:38:20.011417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.011425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 
00:31:22.864 [2024-06-10 11:38:20.011748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.011757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.011875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.011883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.012055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.012064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.012352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.012360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.012574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.012583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.012810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.012819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.013181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.013190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.013388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.013397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.013583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.013592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.013635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.013643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 
00:31:22.864 [2024-06-10 11:38:20.013806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.013815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.014161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.014170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.014355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.014364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.014536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.014545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.014886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.014895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.015226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.015235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.015420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.015429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.015564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.015574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.015882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.015891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.016199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.016209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 
00:31:22.864 [2024-06-10 11:38:20.016536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.016544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.016738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.016745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.017116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.017124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.017466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.017474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.017796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.017804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.018110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.018119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.018305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.018315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.018639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.018648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.018980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.018988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.019180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.019188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 
00:31:22.864 [2024-06-10 11:38:20.019402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.019411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.019586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.019594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.864 qpair failed and we were unable to recover it. 00:31:22.864 [2024-06-10 11:38:20.019908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.864 [2024-06-10 11:38:20.019916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.020107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.020116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.020436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.020444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.020639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.020648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.020993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.021001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.021339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.021348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.021435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.021442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f8000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 
00:31:22.865 [2024-06-10 11:38:20.021625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135600 is same with the state(5) to be set 00:31:22.865 [2024-06-10 11:38:20.022319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.022353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.022560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.022572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.022779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.022789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.023019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.023052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.023305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.023319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.023660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.023670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.024018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.024053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.024419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.024433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.024651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.024661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 
00:31:22.865 [2024-06-10 11:38:20.025036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.025047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.025262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.025271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.025339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.025351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.025578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.025589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.025831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.025845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.026071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.026080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.026373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.026384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.026702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.026711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.026874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.026885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 
00:31:22.865 [2024-06-10 11:38:20.027342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.027928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.027938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.028050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.865 [2024-06-10 11:38:20.028059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.865 qpair failed and we were unable to recover it. 00:31:22.865 [2024-06-10 11:38:20.028281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.028292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.028536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.028546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.028816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.028833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 
00:31:22.866 [2024-06-10 11:38:20.028913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.028925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.029099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.029109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.029308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.029318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.029440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.029449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.029571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.029581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.029816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.029832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.030290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.030301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.030629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.030640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.030878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.030888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.031117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.031127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 
00:31:22.866 [2024-06-10 11:38:20.031322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.031332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.031632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.031643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.031833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.031843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.032266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.032277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.032477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.032486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.032699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.032709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.032907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.032918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.033151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.033162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.033396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.033406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.033459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.033467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 
00:31:22.866 [2024-06-10 11:38:20.033810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.033820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.866 qpair failed and we were unable to recover it. 00:31:22.866 [2024-06-10 11:38:20.034023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.866 [2024-06-10 11:38:20.034032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.034427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.034437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.034646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.034655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.035067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.035193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.035395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.035618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.035811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.035994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.036004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 
00:31:22.867 [2024-06-10 11:38:20.036196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.036206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.036419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.036429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.036731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.036742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.037059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.037072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.037416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.037427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.037820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.037834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.038133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.038143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.038218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.038227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.038529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.038540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.038856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.038866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 
00:31:22.867 [2024-06-10 11:38:20.038953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.038962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.039097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.039106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.039317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.039327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.039637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.039648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.039847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.039858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.040023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.040033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.040239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.040249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.040543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.040553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.040878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.040889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.041090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 
00:31:22.867 [2024-06-10 11:38:20.041224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.041233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.041578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.041589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.041801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.041812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.867 [2024-06-10 11:38:20.042019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.867 [2024-06-10 11:38:20.042030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.867 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.042414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.042423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.042835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.042845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.043198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.043208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.043398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.043407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.043716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.043727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.044051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.044061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 
00:31:22.868 [2024-06-10 11:38:20.044402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.044412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.044721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.044731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.044917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.044928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.045137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.045146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.045201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.045212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.045520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.045531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.045868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.045878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.046162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.046172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.046362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.046371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.046576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.046586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 
00:31:22.868 [2024-06-10 11:38:20.046911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.046922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.047278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.047289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:22.868 [2024-06-10 11:38:20.047345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.868 [2024-06-10 11:38:20.047353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:22.868 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.047668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.047679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.047926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.047938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.048282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.048292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.048644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.048655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.048978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.048988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.049180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.049191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.049470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.049481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 
00:31:23.148 [2024-06-10 11:38:20.049668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.049678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.050042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.050053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.050280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.050290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.050591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.050602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.050927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.050937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.051146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.051156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.051498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.051507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.051677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.051687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.051874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.051885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.052156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.052166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 
00:31:23.148 [2024-06-10 11:38:20.052351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.052360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.052565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.052575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.052890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.052901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.053223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.053234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.053422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.053433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.053646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.053657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.053960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.053971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.054275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.054285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.054611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.054621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 00:31:23.148 [2024-06-10 11:38:20.054945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.148 [2024-06-10 11:38:20.054955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.148 qpair failed and we were unable to recover it. 
00:31:23.148 [2024-06-10 11:38:20.055162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.055171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.055359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.055370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.055577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.055587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.055899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.055910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.056229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.056241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.056303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.056313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.056602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.056613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.056956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.056966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.057145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.057154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.057343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.057353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 
00:31:23.149 [2024-06-10 11:38:20.057687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.057696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.057870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.057881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.058137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.058147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.058322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.058331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.058605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.058615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.058837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.058848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.059197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.059207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.059530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.059541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.059607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.059616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.059865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.059875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 
00:31:23.149 [2024-06-10 11:38:20.060193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.060203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.060519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.060529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.060712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.060722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.061023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.061035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.061327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.061337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.061660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.061670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.061978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.061988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.062307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.062318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.062657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.062668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.062981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.062991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 
00:31:23.149 [2024-06-10 11:38:20.063324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.063334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.063519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.063530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.063826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.149 [2024-06-10 11:38:20.063837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.149 qpair failed and we were unable to recover it. 00:31:23.149 [2024-06-10 11:38:20.064023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.064033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.064359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.064369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.064552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.064562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.064869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.064880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.064934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.064944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.065247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.065257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.065332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.065340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 
00:31:23.150 [2024-06-10 11:38:20.065602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.065612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.065823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.065833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.065962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.065971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.066026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.066035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.066243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.066255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.066576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.066586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.066899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.066909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.066967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.066976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.067128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.067138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.067343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.067353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 
00:31:23.150 [2024-06-10 11:38:20.067572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.067582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.067889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.067900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.067959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.067968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.068308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.068319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.068541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.068552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.068878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.068889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.069227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.069237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.069573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.069583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.069770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.069780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.070088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.070098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 
00:31:23.150 [2024-06-10 11:38:20.070302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.070311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.070581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.070591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.070832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.070842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.071054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.071065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.071382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.071392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.071589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.150 [2024-06-10 11:38:20.071599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.150 qpair failed and we were unable to recover it. 00:31:23.150 [2024-06-10 11:38:20.071913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.071925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.072271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.072280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.072477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.072487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.072663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.072673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 
00:31:23.151 [2024-06-10 11:38:20.073004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.073013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.073338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.073349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.073660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.073671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.073991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.074002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.074347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.074357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.074680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.074690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.075029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.075039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.075339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.075350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.075535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.075544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.075712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.075721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 
00:31:23.151 [2024-06-10 11:38:20.076019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.076029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.076078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.076086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.076350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.076360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.076684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.076693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.076878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.076890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.077223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.077233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.077555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.077566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.077612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.077621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.077842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.077853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 00:31:23.151 [2024-06-10 11:38:20.078039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.151 [2024-06-10 11:38:20.078049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.151 qpair failed and we were unable to recover it. 
00:31:23.151 [2024-06-10 11:38:20.078334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.151 [2024-06-10 11:38:20.078344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.151 qpair failed and we were unable to recover it.
00:31:23.151 [2024-06-10 11:38:20.078679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.151 [2024-06-10 11:38:20.078689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.151 qpair failed and we were unable to recover it.
00:31:23.151 [2024-06-10 11:38:20.078746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.151 [2024-06-10 11:38:20.078754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.151 qpair failed and we were unable to recover it.
00:31:23.151 [2024-06-10 11:38:20.078880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.151 [2024-06-10 11:38:20.078889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.151 qpair failed and we were unable to recover it.
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Write completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Write completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.151 Read completed with error (sct=0, sc=8)
00:31:23.151 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Write completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 Read completed with error (sct=0, sc=8)
00:31:23.152 starting I/O failed
00:31:23.152 [2024-06-10 11:38:20.079621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:23.152 [2024-06-10 11:38:20.079992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.080038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1137770 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.080393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.080405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.080590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.080600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.080783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.080794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.080851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.080861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.081045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.081055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.081389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.081400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.081586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.152 [2024-06-10 11:38:20.081596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.152 qpair failed and we were unable to recover it.
00:31:23.152 [2024-06-10 11:38:20.081903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.081914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.082228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.082238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.082553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.082566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.082902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.082912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.083229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.083239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.083426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.083435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.083763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.083773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.084093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.084104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.084290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.084301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.084638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.084648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 
00:31:23.152 [2024-06-10 11:38:20.084882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.084891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.085115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.085125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.085465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.085475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.085791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.085802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.086138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.086150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.086496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.086506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.086693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.086704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.086984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.086995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.087303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.087314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 00:31:23.152 [2024-06-10 11:38:20.087634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.152 [2024-06-10 11:38:20.087645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.152 qpair failed and we were unable to recover it. 
00:31:23.152 [2024-06-10 11:38:20.087981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.087992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.088334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.088343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.088530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.088541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.088739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.088750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.088934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.088944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.089145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.089154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.089346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.089357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.089698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.089709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.090050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.090061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 00:31:23.153 [2024-06-10 11:38:20.090449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.153 [2024-06-10 11:38:20.090459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.153 qpair failed and we were unable to recover it. 
00:31:23.159 [2024-06-10 11:38:20.143952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.143962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.144271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.144281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.144464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.144474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.144765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.144776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.144970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.144981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.145301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.145312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.145656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.145666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.145845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.145855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.146033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.146043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.146274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.146286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 
00:31:23.159 [2024-06-10 11:38:20.146587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.146597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.146904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.146914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.147209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.147220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.147534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.147544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.147875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.147885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.148219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.148229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.148565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.148575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.148895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.148906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.149211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.149221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 00:31:23.159 [2024-06-10 11:38:20.149560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.159 [2024-06-10 11:38:20.149571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.159 qpair failed and we were unable to recover it. 
00:31:23.160 [2024-06-10 11:38:20.149820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.149833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.150201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.150211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.150524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.150534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.150727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.150736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.151014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.151024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.151360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.151370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.151699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.151709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.152045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.152056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.152246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.152256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.152591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.152602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 
00:31:23.160 [2024-06-10 11:38:20.152784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.152795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.153093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.153104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.153419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.153430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.153790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.153800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.153981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.153991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.154299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.154309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.154633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.154644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.154931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.154941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.155254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.155263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.155387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.155397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 
00:31:23.160 [2024-06-10 11:38:20.155683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.155694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.156033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.156043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.156351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.156361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.156713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.156724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.157039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.157050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.157386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.157396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.157697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.157708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.157892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.157902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.158059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.158069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.158348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.158360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 
00:31:23.160 [2024-06-10 11:38:20.158554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.158564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.158902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.158914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.160 [2024-06-10 11:38:20.159225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.160 [2024-06-10 11:38:20.159236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.160 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.159657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.159667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.159987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.159998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.160335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.160344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.160522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.160532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.160867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.160878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.161187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.161198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.161514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.161524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 
00:31:23.161 [2024-06-10 11:38:20.161690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.161699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.162005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.162015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.162323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.162334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.162526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.162538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.162702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.162713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.162905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.162917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.163220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.163230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.163427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.163438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.163770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.163782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.164084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.164095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 
00:31:23.161 [2024-06-10 11:38:20.164448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.164459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.164800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.164811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.165164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.165175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.165522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.165532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.165854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.165864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.166083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.166093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.166404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.166416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.166467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.166476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.166773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.166783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.166968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.166977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 
00:31:23.161 [2024-06-10 11:38:20.167158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.167169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.167498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.167508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.167827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.167838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.168170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.168180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.168503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.168512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.168836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.161 [2024-06-10 11:38:20.168847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.161 qpair failed and we were unable to recover it. 00:31:23.161 [2024-06-10 11:38:20.169186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.169196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.169482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.169492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.169803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.169813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.170147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.170159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 
00:31:23.162 [2024-06-10 11:38:20.170345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.170354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.170625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.170635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.170964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.170975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.171295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.171306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.171691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.171700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.172038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.172048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.172376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.172386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.172705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.172715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.173045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.173056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.173374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.173385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 
00:31:23.162 [2024-06-10 11:38:20.173700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.173711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.173898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.173908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.174242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.174253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.174571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.174581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.174767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.174776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.175109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.175119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.175468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.175479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.175703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.175714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.176030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.176040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.176358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.176369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 
00:31:23.162 [2024-06-10 11:38:20.176705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.176715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.177045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.177056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.177378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.177388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.177729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.177740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.178071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.178081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.178399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.178411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.178749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.178760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.179048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.179060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.179374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.162 [2024-06-10 11:38:20.179384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.162 qpair failed and we were unable to recover it. 00:31:23.162 [2024-06-10 11:38:20.179695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.179705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 
00:31:23.163 [2024-06-10 11:38:20.179929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.179939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.180245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.180256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.180580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.180589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.180905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.180916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.181097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.181107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.181288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.181297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.181625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.181636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.181972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.181982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.182313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.182323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.182510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.182521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 
00:31:23.163 [2024-06-10 11:38:20.182856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.182866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.183218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.183229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.183550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.183561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.183879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.183889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.184079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.184089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.184239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.184249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.184579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.184588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.184908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.184919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.185212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.185222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.185541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.185551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 
00:31:23.163 [2024-06-10 11:38:20.185869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.185879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.186185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.186195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.186470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.186479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.186802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.186812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.187148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.187158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.187483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.187494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.187811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.187825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.163 qpair failed and we were unable to recover it. 00:31:23.163 [2024-06-10 11:38:20.188148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.163 [2024-06-10 11:38:20.188157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.188295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.188305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.188620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.188630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 
00:31:23.164 [2024-06-10 11:38:20.188971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.188981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.189193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.189203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.189534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.189544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.189784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.189794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.189983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.189993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.190196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.190206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.190540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.190550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.190872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.190883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.191071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.191080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.191266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.191275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 
00:31:23.164 [2024-06-10 11:38:20.191514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.191524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.191845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.191856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.192198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.192208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.192453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.192463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.192678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.192688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.192883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.192893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.193134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.193145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.193331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.193341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.193645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.193655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.193993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.194005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 
00:31:23.164 [2024-06-10 11:38:20.194167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.194177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.194360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.194370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.194677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.194687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.194872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.194882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.195198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.195208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.195504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.195514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.195888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.195899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.196081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.196091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.196374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.196384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 00:31:23.164 [2024-06-10 11:38:20.196702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.164 [2024-06-10 11:38:20.196713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.164 qpair failed and we were unable to recover it. 
00:31:23.164 [2024-06-10 11:38:20.197018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.197029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.197197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.197206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.197503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.197513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.197864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.197875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.198090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.198101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.198420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.198431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.198623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.198636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.198983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.198994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.199218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.199230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.199574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.199584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 
00:31:23.165 [2024-06-10 11:38:20.199899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.199909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.200241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.200252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.200554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.200565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.200863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.200874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.201051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.201061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.201397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.201408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.201589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.201601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.201775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.201786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.201971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.201983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.202297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.202308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 
00:31:23.165 [2024-06-10 11:38:20.202625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.202636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.202949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.202959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.203346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.203357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.203665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.203676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.204023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.204033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.204216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.204226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.204519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.204529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.204721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.204732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.205074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.205085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.205394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.205408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 
00:31:23.165 [2024-06-10 11:38:20.205750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.205761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.206121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.206131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.206362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.165 [2024-06-10 11:38:20.206372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.165 qpair failed and we were unable to recover it. 00:31:23.165 [2024-06-10 11:38:20.206609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.206619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.206669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.206678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.207006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.207017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.207335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.207345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.207532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.207542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.207884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.207896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.208267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.208278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 
00:31:23.166 [2024-06-10 11:38:20.208614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.208625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.208805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.208815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.209108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.209119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.209458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.209471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.209796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.209807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.209995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.210005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.210402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.210413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.210736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.210746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.211057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.211068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.211405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.211417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 
00:31:23.166 [2024-06-10 11:38:20.211740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.211751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.211942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.211954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.212304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.212316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.212639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.212651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.212985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.212996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.213306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.213316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.213503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.213513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.213788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.213799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.214109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.214119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.214309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.214318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 
00:31:23.166 [2024-06-10 11:38:20.214654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.214664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.214974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.214985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.215300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.215311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.215495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.215505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.215799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.215809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.216162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.166 [2024-06-10 11:38:20.216174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.166 qpair failed and we were unable to recover it. 00:31:23.166 [2024-06-10 11:38:20.216495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.216506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.216843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.216854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.217115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.217125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.217440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.217453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 
00:31:23.167 [2024-06-10 11:38:20.217827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.217838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.218176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.218186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.218487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.218498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.218835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.218846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.219159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.219169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.219493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.219503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.219854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.219865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.220199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.220209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.220434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.220444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.220790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.220800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 
00:31:23.167 [2024-06-10 11:38:20.221117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.221128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.221443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.221454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.221786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.221797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.221984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.221995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.222187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.222198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.222516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.222527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.222864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.222875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.223059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.223070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.223405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.223417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.223731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.223742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 
00:31:23.167 [2024-06-10 11:38:20.223933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.223943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.224108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.224118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.224289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.224299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.224601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.224611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.224900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.224910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.225274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.225284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.225609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.225620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.225820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.225840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.226136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.167 [2024-06-10 11:38:20.226146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.167 qpair failed and we were unable to recover it. 00:31:23.167 [2024-06-10 11:38:20.226468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.226479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-10 11:38:20.226816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.226830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.226999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.227011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.227306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.227316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.227502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.227511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.227828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.227840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.228178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.228189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.228529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.228540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.228725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.228737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.229025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.229036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.229327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.229341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-10 11:38:20.229396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.229406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.229456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.229466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.229776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.229787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.230128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.230139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.230458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.230469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.230785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.230796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.231101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.231112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.231430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.231441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.231577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.231588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.231882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.231892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-10 11:38:20.232062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.232071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.232406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.232416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.232718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.232728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.233029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.233040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.233205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.233214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.233541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.233551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.233866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.233878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.234222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.234233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.234543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.234553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.234742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.234752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-10 11:38:20.235031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.235041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.235097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.235106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-10 11:38:20.235294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-10 11:38:20.235305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.235640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.235650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.235892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.235902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.236098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.236109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.236475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.236485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.236800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.236811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.237145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.237156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.237443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.237455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-10 11:38:20.237666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.237677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.237865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.237875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.238200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.238210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.238260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.238268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.238557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.238567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.238901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.238912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.239238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.239249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.239434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.239445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.239784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.239795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.239844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.239855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-10 11:38:20.240149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.240159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.240474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.240484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.240709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.240719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.240989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.241000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.241316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.241327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.241665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.241675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.241854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.241864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.242155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.242165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.242215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.242223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-10 11:38:20.242546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-10 11:38:20.242557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-10 11:38:20.242705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.242716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.243009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.243019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.243070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.243079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.243373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.243383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.243699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.243710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.243939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.243950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.244297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.244307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.244690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.244700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.245038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.245048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.245387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.245397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-10 11:38:20.245713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.245723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.246051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.246061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.246398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.246408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.246660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.246670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.246908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.246918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.247209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.247221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.247542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.247552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.247875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.247886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.248192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.248202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.248521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.248532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-10 11:38:20.248888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.248899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.249253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.249263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.249601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.249612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.249963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.249974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.250318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.250328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.250512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.250521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.250892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.250902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.251093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.251103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.251386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.251396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.251706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.251718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-10 11:38:20.251780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.251791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.252106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.252117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.252380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-10 11:38:20.252400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-10 11:38:20.252731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.252743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.252934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.252945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.253220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.253230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.253553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.253564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.253902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.253912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.254225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.254235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.254420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.254429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-10 11:38:20.254728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.254738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.255041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.255051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.255365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.255375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.255678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.255688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.255876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.255886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.256051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.256062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.256165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.256174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.256392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.256401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.256624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.256634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.256830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.256841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-10 11:38:20.257177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.257187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.257507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.257516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.257694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.257704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.258082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.258093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.258412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.258421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.258608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.258618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.258800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.258809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.258979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.258989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.259266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.259276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.259456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.259466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-10 11:38:20.259788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.259797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.259855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.259864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.260017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.260027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.260344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.260354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.260578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.260588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.260893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-10 11:38:20.260904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-10 11:38:20.261082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.261093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.261291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.261301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.261638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.261649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.261784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.261796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-10 11:38:20.262097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.262107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.262467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.262478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.262814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.262828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.263132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.263142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.263476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.263486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.263819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.263832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.264009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.264018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.264313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.264323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.264659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.264669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.264993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.265003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-10 11:38:20.265298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.265307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.265621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.265632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.265870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.265880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.266186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.266196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.266380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.266390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.266578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.266588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.266901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.266911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.267225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.267235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.267617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.267627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.267981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.267992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-10 11:38:20.268079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.268087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.268363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.268373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.268559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.268569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.268762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.268772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.269153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.269163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.269482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.269492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.269707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.269717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.269939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.269950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.270253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.270263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-10 11:38:20.270602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-10 11:38:20.270612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-10 11:38:20.270804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.270815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.271152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.271162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.271498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.271508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.271829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.271840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.272151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.272161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.272225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.272232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.272419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.272430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.272708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.272718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.272912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.272924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.273095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.273107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-10 11:38:20.273158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.273167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.273476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.273487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.273806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.273816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.274085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.274095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.274291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.274301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.274620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.274630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.274826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.274836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.275167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.275177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.275526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.275536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.275620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.275630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-10 11:38:20.275939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.275949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.276247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.276257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.276445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.276455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.276780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.276790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.276846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.276857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.277141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.277150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.277485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.277495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.277678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.277689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.277978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.277989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.278325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.278335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-10 11:38:20.278522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.278532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.278833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.278844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-10 11:38:20.279043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-10 11:38:20.279054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.279237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.279248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.279553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.279563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.279900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.279911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.280280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.280294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.280598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.280609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.280844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.280854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.281045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.281055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-10 11:38:20.281357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.281367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.281688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.281699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.282083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.282093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.282440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.282450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.282604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.282615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.282895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.282905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.283243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.283254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.283565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.283575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.283900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.283910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.284092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.284104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-10 11:38:20.284402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.284413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.284461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.284472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.284721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.284732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.285027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.285036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.285349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.285359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.285683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.285693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.286012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.286022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.286342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.286352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.286694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.286705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-10 11:38:20.287003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.287014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-10 11:38:20.287348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-10 11:38:20.287358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.287658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.287669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.288012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.288023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.288212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.288222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.288409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.288420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.288618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.288628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.288956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.288966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.289273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.289283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.289348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.289356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.289590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.289600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-10 11:38:20.289886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.289897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.290067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.290077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.290382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.290392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.290568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.290579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.290743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.290753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.290966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.290977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.291028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.291040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.291332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.291342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.291525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.291535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.291790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.291800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-10 11:38:20.291983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.291993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.292200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.292210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.292559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.292569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.292750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.292760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.293105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.293116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.293309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.293319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.293653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.293663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.293986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.293997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.294337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.294347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.294533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.294543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-10 11:38:20.294702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.294712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.295110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.295120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.295445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.295454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-10 11:38:20.295775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-10 11:38:20.295785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.295985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.295996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.296183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.296194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.296466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.296477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.296705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.296715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.296894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.296905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.297253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.297263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-10 11:38:20.297491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.297502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.297807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.297818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.298140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.298151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.298343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.298352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.298793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.298803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.298992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.299002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.299045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.299053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.299350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.299360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.299546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.299556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.299893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.299903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-10 11:38:20.300085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.300094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.300140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.300150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.300451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.300462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.300652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.300664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.300997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.301007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.301248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.301258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.301575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.301588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.301929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.301939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.302127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.302137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.302311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.302321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-10 11:38:20.302633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.302642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.302853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.302863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.303072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.303082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.303411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.303421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.303811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.303824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.304127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.304137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-10 11:38:20.304487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-10 11:38:20.304498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.304728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.304739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.304921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.304930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.305236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.305247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 [2024-06-10 11:38:20.305427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.305437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.305609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.305619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.305938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.305948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.306260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.306270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.306456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.306466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.306752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.306762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.306842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.306851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.307063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.307072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.307393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.307402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.307589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.307600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 [2024-06-10 11:38:20.307937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.307947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.308155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.308166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.308222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.308232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.308424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.308434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.308734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.308745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.309069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.309080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.309417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.309427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.309759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.309769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.309951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.309962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.310168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.310179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 [2024-06-10 11:38:20.310471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.310481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.310677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.310686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.311154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.311164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.311451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.311461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.311776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.311785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.312107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.312117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.312426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.312438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.312760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.312771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.313105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.313115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-10 11:38:20.313303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-10 11:38:20.313312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-10 11:38:20.313586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.313597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.313974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.313984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.314168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.314179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.314484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.314494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.314829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.314838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.315152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.315162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.315476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.315486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.315771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.315781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.316011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.316022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.316339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.316349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-10 11:38:20.316531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.316541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.316924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.316935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.317279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.317289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.317641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.317652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.317850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.317860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.318159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.318169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.318507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.318517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.318704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.318714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.318995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.319005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.319344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.319354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-10 11:38:20.319694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.319704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.320039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.320049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.320390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.320401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.320582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.320593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.320783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.320794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.321107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.321117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.321430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.321440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.321774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.321784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.322052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.322062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.322344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.322354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-10 11:38:20.322669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.322679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.322795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.322805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.323140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-10 11:38:20.323151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-10 11:38:20.323469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.323480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.323816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.323829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.324157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.324167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.324484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.324497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.324787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.324798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.325116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.325127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.325442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.325453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 
00:31:23.179 [2024-06-10 11:38:20.325702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.325712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.325882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.325893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.326240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.326251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.326651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.326661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.326844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.326858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.327151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.327162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.327496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.327506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.327830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.327840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.328031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.328042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.328219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.328230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 
00:31:23.179 [2024-06-10 11:38:20.328562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.328572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.328932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.328943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.329132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.329142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.329428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.329439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.329752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.329763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.330107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.330117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.330432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.330442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.330760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.330771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.330955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.179 [2024-06-10 11:38:20.330965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.179 qpair failed and we were unable to recover it. 00:31:23.179 [2024-06-10 11:38:20.331296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.331307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 
00:31:23.180 [2024-06-10 11:38:20.331462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.331473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.331780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.331789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.332107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.332118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.332454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.332465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.332809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.332820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.332992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.333003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.333310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.333321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.333371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.333380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.333662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.333672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.333989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.334000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 
00:31:23.180 [2024-06-10 11:38:20.334326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.334337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.334690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.334701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.334889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.334899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.335190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.335201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.335522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.335532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.335842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.335853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.336186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.336200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.336539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.336550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.336857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.336869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.337202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.337213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 
00:31:23.180 [2024-06-10 11:38:20.337565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.337575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.337917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.337928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.338250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.338261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.338305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.338314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.338613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.338624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.338987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.338999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.339353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.339364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.339591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.339602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.339916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.339927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.340168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.340178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 
00:31:23.180 [2024-06-10 11:38:20.340366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.340376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.180 [2024-06-10 11:38:20.340694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.180 [2024-06-10 11:38:20.340704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.180 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.340889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.340899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.341242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.341252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.341568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.341578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.341895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.341906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.342250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.342260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.342598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.342609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.342929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.342939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.343277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.343289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 
00:31:23.181 [2024-06-10 11:38:20.343468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.343480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.343665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.343676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.343973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.343985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.344306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.344319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.344505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.344516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.344852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.344862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.345198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.345208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.345525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.345536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.345715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.345726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.345917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.345929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 
00:31:23.181 [2024-06-10 11:38:20.346208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.346219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.346341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.346350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.346667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.346677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.346995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.347006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.347381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.347391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.347675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.347686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.347901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.347914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.348239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.348249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.348438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.348449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.348766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.348777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 
00:31:23.181 [2024-06-10 11:38:20.349099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.349110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.349425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.349435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.349619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.349629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.349935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.349946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.181 [2024-06-10 11:38:20.350122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.181 [2024-06-10 11:38:20.350133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.181 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.350430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.350441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.350688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.350699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.351002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.351012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.351340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.351351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.351697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.351707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 
00:31:23.182 [2024-06-10 11:38:20.351900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.351911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.352187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.352198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.352535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.352546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.352739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.352750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.353062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.353072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.353258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.353269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.353596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.353607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.182 [2024-06-10 11:38:20.353956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.182 [2024-06-10 11:38:20.353966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.182 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.354302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.354313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.354629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.354640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 
00:31:23.452 [2024-06-10 11:38:20.354950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.354961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.355296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.355306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.355690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.355701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.355996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.356007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.452 [2024-06-10 11:38:20.356061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.452 [2024-06-10 11:38:20.356070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.452 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.356348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.356359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.356548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.356559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.356843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.356854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.357176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.357186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.357509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.357519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 
00:31:23.453 [2024-06-10 11:38:20.357746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.357757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.358075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.358086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.358393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.358404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.358587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.358598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.358889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.358899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.359216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.359227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.359450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.359463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.359802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.359813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.360160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.360171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.360481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.360491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 
00:31:23.453 [2024-06-10 11:38:20.360679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.360689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.361035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.361045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.361365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.361375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.361721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.361731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.361909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.361920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.362237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.362248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.362586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.362596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.362913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.362924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.363260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.363270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.363458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.363469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 
00:31:23.453 [2024-06-10 11:38:20.363787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.363798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.364139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.364149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.364486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.364496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.364719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.364730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.453 qpair failed and we were unable to recover it. 00:31:23.453 [2024-06-10 11:38:20.364905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.453 [2024-06-10 11:38:20.364915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.365102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.365112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.365397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.365406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.365731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.365742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.366076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.366087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.366136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.366144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 
00:31:23.454 [2024-06-10 11:38:20.366457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.366468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.366735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.366747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.366939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.366949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.367237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.367247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.367443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.367453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.367497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.367505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.367837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.367847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.368185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.368195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.368512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.368522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.368839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.368849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 
00:31:23.454 [2024-06-10 11:38:20.369117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.369128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.369449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.369459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.369776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.369787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.370125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.370136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.370428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.370438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.370493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.370501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.370811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.370826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.371145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.371155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.371444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.371455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.371774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.371784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 
00:31:23.454 [2024-06-10 11:38:20.372115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.372125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.372303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.372314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.372635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.372646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.372988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.372999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.373313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.373324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.373376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.373384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.373683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.373693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.454 qpair failed and we were unable to recover it. 00:31:23.454 [2024-06-10 11:38:20.374027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.454 [2024-06-10 11:38:20.374038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.374356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.374367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.374554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.374564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 
00:31:23.455 [2024-06-10 11:38:20.374748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.374759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.375058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.375068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.375254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.375265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.375564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.375575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.375910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.375920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.376244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.376255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.376594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.376605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.376946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.376957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.377285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.377296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.377488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.377498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 
00:31:23.455 [2024-06-10 11:38:20.377836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.377847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.378015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.378025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.378328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.378338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.378656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.378667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.378981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.378993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.379388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.379398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.379706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.379717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.380055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.380065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.380416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.380427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.380765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.380776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 
00:31:23.455 [2024-06-10 11:38:20.380968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.380978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.381164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.381174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.381503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.381513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.381820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.381834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.382022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.382031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.382234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.382245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.382418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.382430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.382616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.382625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.382960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.382971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 00:31:23.455 [2024-06-10 11:38:20.383121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.383130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.455 qpair failed and we were unable to recover it. 
00:31:23.455 [2024-06-10 11:38:20.383354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.455 [2024-06-10 11:38:20.383363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.383556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.383566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.383901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.383911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.384142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.384152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.384469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.384479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.384800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.384810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.385144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.385154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.385365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.385376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.385560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.385571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.385754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.385764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 
00:31:23.456 [2024-06-10 11:38:20.385949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.385960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.386313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.386323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.386509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.386519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.386840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.386851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.387085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.387095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.387266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.387276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.387556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.387566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.387964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.387973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.388278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.388288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.388537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.388547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 
00:31:23.456 [2024-06-10 11:38:20.388857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.388868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.389192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.389201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.389520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.389530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.389848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.389859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.390167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.390177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.390495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.390505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.390835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.390846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.391045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.391055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.391286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.391296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.391481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.391491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 
00:31:23.456 [2024-06-10 11:38:20.391660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.391669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.391907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.391917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.392169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.392179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.456 qpair failed and we were unable to recover it. 00:31:23.456 [2024-06-10 11:38:20.392515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.456 [2024-06-10 11:38:20.392525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.392914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.392924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.392968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.392976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.393301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.393315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.393634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.393644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.393996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.394006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.394302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.394312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 
00:31:23.457 [2024-06-10 11:38:20.394636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.394646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.394963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.394973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.395312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.395322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.395490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.395499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.395674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.395683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.396024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.396034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.396359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.396370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.396687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.396697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.397032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.397042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.397296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.397307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 
00:31:23.457 [2024-06-10 11:38:20.397496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.397505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.397714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.397724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.398038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.398049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.398368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.398378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.398671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.398681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.398999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.399010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.399182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.399191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.399528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.399537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.399854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.399864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.400193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.400202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 
00:31:23.457 [2024-06-10 11:38:20.400511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.400521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.400794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.400805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.401117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.401127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.401465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.401476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.401685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.401695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.402086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.402096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.457 qpair failed and we were unable to recover it. 00:31:23.457 [2024-06-10 11:38:20.402408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.457 [2024-06-10 11:38:20.402418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.402744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.402754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.403063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.403073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.403444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.403454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 
00:31:23.458 [2024-06-10 11:38:20.403764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.403774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.403912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.403921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.404094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.404103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.404278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.404287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.404596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.404605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.404951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.404961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.405296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.405308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.405625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.405635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.405806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.405815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.406011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.406021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 
00:31:23.458 [2024-06-10 11:38:20.406352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.406362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.406549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.406559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.406864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.406874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.407078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.407087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.407369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.407379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.407560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.407571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.407751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.407761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.408075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.408085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.408431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.408441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.408629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.408640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 
00:31:23.458 [2024-06-10 11:38:20.408988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.408997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.409327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.409336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.458 qpair failed and we were unable to recover it. 00:31:23.458 [2024-06-10 11:38:20.409518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.458 [2024-06-10 11:38:20.409529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.409857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.409866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.409966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.409976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.410301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.410312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.410492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.410501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.410851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.410861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.411193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.411202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.411556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.411566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 
00:31:23.459 [2024-06-10 11:38:20.411763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.411774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.412099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.412109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.412398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.412408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.412634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.412645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.412983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.412993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.413308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.413318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.413534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.413544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.413906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.413916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:23.459 [2024-06-10 11:38:20.414216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.414233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 
00:31:23.459 [2024-06-10 11:38:20.414419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.414428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:23.459 [2024-06-10 11:38:20.414748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.414760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:23.459 [2024-06-10 11:38:20.415049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.415060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:23.459 [2024-06-10 11:38:20.415233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.415245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:23.459 [2024-06-10 11:38:20.415552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.415563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.415608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.415616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.415803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.415813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.416203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.416213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 
00:31:23.459 [2024-06-10 11:38:20.416543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.416555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.416890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.416900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.417237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.417247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.417463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.417472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.417809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.417819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.418146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.418156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.459 [2024-06-10 11:38:20.418468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.459 [2024-06-10 11:38:20.418479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.459 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.418665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.418675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.418972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.418982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.419153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.419164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 
00:31:23.460 [2024-06-10 11:38:20.419505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.419517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.419878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.419889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.420217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.420227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.420500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.420511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.420737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.420747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.420871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.420880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.421195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.421205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.421393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.421402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.421609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.421620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.421820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.421838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 
00:31:23.460 [2024-06-10 11:38:20.422152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.422162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.422476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.422486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.422676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.422686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.422980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.422990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.423308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.423321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.423625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.423635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.423950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.423961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.424300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.424311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.424645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.424656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.424988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.424998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 
00:31:23.460 [2024-06-10 11:38:20.425285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.425295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.425631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.425641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.425959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.425971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.426150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.426161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.426469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.426479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.426794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.426805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.427130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.427140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.427447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.427459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.427787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.427798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.460 qpair failed and we were unable to recover it. 00:31:23.460 [2024-06-10 11:38:20.428032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.460 [2024-06-10 11:38:20.428045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 
00:31:23.461 [2024-06-10 11:38:20.428385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.428396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.428745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.428755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.429081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.429091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.429407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.429417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.429765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.429775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.430103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.430113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.430490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.430501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.430690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.430700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.431100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.431110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.431457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.431468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 
00:31:23.461 [2024-06-10 11:38:20.431836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.431848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.432040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.432050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.432228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.432247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.432565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.432575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.432852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.432863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.433050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.433061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.433448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.433457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.433781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.433791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.434120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.434130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.434317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.434327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 
00:31:23.461 [2024-06-10 11:38:20.434644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.434654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.434842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.434851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.435036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.435046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.435373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.435383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.435621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.435633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.435774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.435784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.436140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.436150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.436198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.436206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.436504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.436513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.436858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.436869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 
00:31:23.461 [2024-06-10 11:38:20.437057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.437068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.437392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.437402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.461 qpair failed and we were unable to recover it. 00:31:23.461 [2024-06-10 11:38:20.437748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.461 [2024-06-10 11:38:20.437760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.437947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.437959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.438300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.438310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.438660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.438671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.438989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.438999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.439330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.439340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.439694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.439703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.440047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.440057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 
00:31:23.462 [2024-06-10 11:38:20.440104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.440113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.440437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.440447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.440632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.440642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.440977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.440988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.441274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.441284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.441328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.441337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.441643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.441653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.441837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.441848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.442136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.442147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.442462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.442472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 
00:31:23.462 [2024-06-10 11:38:20.442696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.442706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.442956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.442968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.443293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.443304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.443630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.443641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.443713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.443723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.443899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.443909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.444113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.444124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.444310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.444320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.444647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.444657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.444985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.444995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 
00:31:23.462 [2024-06-10 11:38:20.445314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.445324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.445640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.445650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.445991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.446001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.446317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.446327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.446727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.446740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.462 [2024-06-10 11:38:20.447056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.462 [2024-06-10 11:38:20.447067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.462 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.447393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.447404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.447715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.447725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.447912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.447924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.448257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.448267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 
00:31:23.463 [2024-06-10 11:38:20.448318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.448326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.448461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.448471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.448687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.448697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.449041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.449051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.449230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.449240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.449572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.449582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.449902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.449912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.450215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.450225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.450416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.450427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.450705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.450716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 
00:31:23.463 [2024-06-10 11:38:20.450934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.450943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.451125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.451135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.451452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.451462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.451796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.451807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.451988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.451998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.452175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.452184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.452483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.452494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.452802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.452812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.453126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.453137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 00:31:23.463 [2024-06-10 11:38:20.453481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.463 [2024-06-10 11:38:20.453491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.463 qpair failed and we were unable to recover it. 
00:31:23.463 [2024-06-10 11:38:20.453676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.463 [2024-06-10 11:38:20.453686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.463 qpair failed and we were unable to recover it.
00:31:23.463 [2024-06-10 11:38:20.453884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.463 [2024-06-10 11:38:20.453894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.463 qpair failed and we were unable to recover it.
00:31:23.463 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:23.463 [2024-06-10 11:38:20.454200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.463 [2024-06-10 11:38:20.454211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 [2024-06-10 11:38:20.454497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.464 [2024-06-10 11:38:20.454509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:23.464 [2024-06-10 11:38:20.454828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.464 [2024-06-10 11:38:20.454840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.464 [2024-06-10 11:38:20.455032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.464 [2024-06-10 11:38:20.455043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.464 [2024-06-10 11:38:20.455326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.464 [2024-06-10 11:38:20.455337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 [2024-06-10 11:38:20.455651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.464 [2024-06-10 11:38:20.455662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.464 qpair failed and we were unable to recover it.
00:31:23.464 [2024-06-10 11:38:20.455712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.455722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.456034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.456044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.456347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.456357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.456748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.456757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.456944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.456957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.457266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.457276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.457587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.457597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.457937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.457947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.458264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.458274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.458611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.458620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 
00:31:23.464 [2024-06-10 11:38:20.458842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.458853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.459172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.459182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.459345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.459354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.459532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.459542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.459862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.459873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.460064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.460073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.460417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.460427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.460761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.460771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.460958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.460968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.461287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.461298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 
00:31:23.464 [2024-06-10 11:38:20.461615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.461625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.462011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.462023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.462337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.462347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.462538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.462548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.462888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.462899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.463086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.464 [2024-06-10 11:38:20.463097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.464 qpair failed and we were unable to recover it. 00:31:23.464 [2024-06-10 11:38:20.463437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.463447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.463763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.463774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.464087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.464097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.464411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.464422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 
00:31:23.465 [2024-06-10 11:38:20.464743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.464754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.465061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.465074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.465383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.465393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.465464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.465472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.465621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.465632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.465956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.465967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.466335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.466346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.466651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.466662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.467006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.467017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 00:31:23.465 [2024-06-10 11:38:20.467198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.465 [2024-06-10 11:38:20.467208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.465 qpair failed and we were unable to recover it. 
00:31:23.465 [2024-06-10 11:38:20.467381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.467391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.467583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.467594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.467918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.467928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.468239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.468249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.468560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.468570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.468868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.468878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.469118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.469128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.469463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.469473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.469698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 Malloc0 [2024-06-10 11:38:20.469708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:23.465 [2024-06-10 11:38:20.470064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.470075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.465 [2024-06-10 11:38:20.470382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.470392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.470704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.470714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.471045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.471055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.471221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.471231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.471432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.471441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.471726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.471736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.472115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.472128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.465 [2024-06-10 11:38:20.472377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.465 [2024-06-10 11:38:20.472387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.465 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.472700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.472710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.473048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.473058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.473213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:23.466 [2024-06-10 11:38:20.473374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.473383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.473572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.473582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.473883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.473893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.474086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.474098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.474277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.474287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.474632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.466 [2024-06-10 11:38:20.474642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420
00:31:23.466 qpair failed and we were unable to recover it.
00:31:23.466 [2024-06-10 11:38:20.474925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.474935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.475182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.475192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.475504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.475514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.475849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.475862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.476208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.476218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.476524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.476534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.476587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.476596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.476878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.476888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.477088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.477098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.477434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.477445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 
00:31:23.466 [2024-06-10 11:38:20.477831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.477841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.478131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.478140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.478437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.478446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.478772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.478783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.479111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.479121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.479456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.479466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.479781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.479791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.480107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.480117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.480411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.480421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 00:31:23.466 [2024-06-10 11:38:20.480669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.466 [2024-06-10 11:38:20.480679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0600000b90 with addr=10.0.0.2, port=4420 00:31:23.466 qpair failed and we were unable to recover it. 
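The errno = 111 in these records is ECONNREFUSED: the host side keeps retrying connect() against 10.0.0.2:4420 before any listener has been added there. As a purely illustrative sketch (not part of the test output, and assuming the target address is reachable from the shell), the same refusal can be observed with bash's /dev/tcp redirection:

#!/usr/bin/env bash
# Illustrative only: probe the address/port the initiator's connect() loop is retrying.
# With no listener bound yet, the connect attempt is refused (errno 111, ECONNREFUSED).
addr=10.0.0.2   # target address used in this run
port=4420       # NVMe/TCP port, matching the log

if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
    echo "listener is up on ${addr}:${port}"
else
    echo "connect() to ${addr}:${port} was refused -- no listener yet"
fi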
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-06-10 11:38:20.501448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
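For orientation, the rpc_cmd traces above are target-side SPDK JSON-RPC calls. A minimal sketch of the same setup driven directly through scripts/rpc.py is shown below; it assumes a running nvmf_tgt, that rpc_cmd is the autotest wrapper around rpc.py, and an assumed Malloc0 bdev whose size parameters are illustrative only:

#!/usr/bin/env bash
# Minimal sketch of the target-side setup traced above (assumptions noted in the lead-in).
RPC=./scripts/rpc.py   # assumed path to SPDK's RPC client inside the spdk checkout

$RPC nvmf_create_transport -t tcp                 # produces the "TCP Transport Init" notice
$RPC bdev_malloc_create -b Malloc0 64 512         # assumed backing bdev (64 MiB, 512 B blocks)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# After the last call the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420".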
00:31:23.469 [2024-06-10 11:38:20.503815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-06-10 11:38:20.503903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-06-10 11:38:20.503920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-06-10 11:38:20.503929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-06-10 11:38:20.503935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90
[2024-06-10 11:38:20.503955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
qpair failed and we were unable to recover it.
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
11:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1734430
(the Unknown controller ID 0x1 / Connect command failed, rc -5 / Connect command completed with error: sct 1, sc 130 / Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x7f0600000b90 / CQ transport error -6 (No such device or address) on qpair id 1 / qpair failed and we were unable to recover it. sequence then repeats for each further qpair connect attempt while the test waits)
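When triaging a run like this, the repeated qpair failures are easier to read as counts than as raw records. A small, hypothetical helper along these lines works (console.log stands in for the saved build output; it is not part of the autotest harness):

#!/usr/bin/env bash
# Hypothetical triage helper: summarize the failure signatures in a saved console log.
log=console.log   # assumed filename of the captured build output

echo "total unrecovered qpair failures:"
grep -c 'qpair failed and we were unable to recover it' "$log"

echo "socket-level errors by errno:"
grep -o 'connect() failed, errno = [0-9]*' "$log" | sort | uniq -c

echo "fabrics CONNECT completions by status (sct/sc):"
grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' "$log" | sort | uniq -c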
00:31:23.734 [2024-06-10 11:38:20.824514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.734 [2024-06-10 11:38:20.824574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.734 [2024-06-10 11:38:20.824589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.734 [2024-06-10 11:38:20.824596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.734 [2024-06-10 11:38:20.824602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.734 [2024-06-10 11:38:20.824616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.734 qpair failed and we were unable to recover it. 00:31:23.734 [2024-06-10 11:38:20.834433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.734 [2024-06-10 11:38:20.834491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.734 [2024-06-10 11:38:20.834506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.734 [2024-06-10 11:38:20.834513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.734 [2024-06-10 11:38:20.834521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.734 [2024-06-10 11:38:20.834535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.734 qpair failed and we were unable to recover it. 00:31:23.734 [2024-06-10 11:38:20.844456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.734 [2024-06-10 11:38:20.844525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.734 [2024-06-10 11:38:20.844540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.734 [2024-06-10 11:38:20.844547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.734 [2024-06-10 11:38:20.844553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.734 [2024-06-10 11:38:20.844568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.734 qpair failed and we were unable to recover it. 
00:31:23.734 [2024-06-10 11:38:20.854501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.734 [2024-06-10 11:38:20.854563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.734 [2024-06-10 11:38:20.854579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.734 [2024-06-10 11:38:20.854586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.734 [2024-06-10 11:38:20.854592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.734 [2024-06-10 11:38:20.854606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.734 qpair failed and we were unable to recover it. 00:31:23.734 [2024-06-10 11:38:20.864599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.734 [2024-06-10 11:38:20.864711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.734 [2024-06-10 11:38:20.864735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.864743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.864749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.864768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.874644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.874704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.874727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.874735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.874743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.874760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 
00:31:23.735 [2024-06-10 11:38:20.884692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.884750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.884766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.884777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.884784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.884799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.894714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.894770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.894786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.894793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.894799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.894813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.904777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.904844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.904859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.904866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.904872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.904887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 
00:31:23.735 [2024-06-10 11:38:20.914763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.914828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.914843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.914850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.914857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.914871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.924694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.924749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.924764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.924770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.924776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.924790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.934840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.934936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.934952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.934959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.934965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.934979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 
00:31:23.735 [2024-06-10 11:38:20.944835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.944898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.944912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.944920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.944926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.944939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.735 [2024-06-10 11:38:20.954766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.735 [2024-06-10 11:38:20.954828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.735 [2024-06-10 11:38:20.954844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.735 [2024-06-10 11:38:20.954851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.735 [2024-06-10 11:38:20.954857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.735 [2024-06-10 11:38:20.954870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.735 qpair failed and we were unable to recover it. 00:31:23.998 [2024-06-10 11:38:20.964905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.998 [2024-06-10 11:38:20.964960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.998 [2024-06-10 11:38:20.964976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.998 [2024-06-10 11:38:20.964983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.998 [2024-06-10 11:38:20.964991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.998 [2024-06-10 11:38:20.965007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.998 qpair failed and we were unable to recover it. 
00:31:23.998 [2024-06-10 11:38:20.974882] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.998 [2024-06-10 11:38:20.974937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.998 [2024-06-10 11:38:20.974952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.998 [2024-06-10 11:38:20.974963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.998 [2024-06-10 11:38:20.974968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.998 [2024-06-10 11:38:20.974983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.998 qpair failed and we were unable to recover it. 00:31:23.998 [2024-06-10 11:38:20.984972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.998 [2024-06-10 11:38:20.985037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.998 [2024-06-10 11:38:20.985052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.998 [2024-06-10 11:38:20.985059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.998 [2024-06-10 11:38:20.985065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.998 [2024-06-10 11:38:20.985079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.998 qpair failed and we were unable to recover it. 00:31:23.998 [2024-06-10 11:38:20.994995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.998 [2024-06-10 11:38:20.995068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.998 [2024-06-10 11:38:20.995084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.998 [2024-06-10 11:38:20.995090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.998 [2024-06-10 11:38:20.995097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.998 [2024-06-10 11:38:20.995111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.998 qpair failed and we were unable to recover it. 
00:31:23.998 [2024-06-10 11:38:21.005047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.998 [2024-06-10 11:38:21.005102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.005117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.005124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.005130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.005143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.015048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.015105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.015121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.015128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.015137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.015151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.025096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.025152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.025167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.025174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.025180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.025193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 
00:31:23.999 [2024-06-10 11:38:21.035115] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.035169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.035184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.035191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.035197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.035210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.045132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.045194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.045209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.045216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.045222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.045235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.055222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.055313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.055329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.055336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.055342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.055356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 
00:31:23.999 [2024-06-10 11:38:21.065212] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.065279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.065297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.065304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.065310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.065324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.075224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.075281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.075296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.075303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.075309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.075322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.085264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.085322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.085337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.085344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.085350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.085363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 
00:31:23.999 [2024-06-10 11:38:21.095288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.095344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.095358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.095365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.095371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.095385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.105325] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.105385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.105400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.105407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.999 [2024-06-10 11:38:21.105413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:23.999 [2024-06-10 11:38:21.105430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:23.999 qpair failed and we were unable to recover it. 00:31:23.999 [2024-06-10 11:38:21.115333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.999 [2024-06-10 11:38:21.115433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.999 [2024-06-10 11:38:21.115449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.999 [2024-06-10 11:38:21.115455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.115461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.115475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 
00:31:24.000 [2024-06-10 11:38:21.125350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.125404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.125419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.125425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.125431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.125445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.135392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.135451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.135467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.135474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.135479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.135493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.145434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.145495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.145510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.145517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.145523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.145536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 
00:31:24.000 [2024-06-10 11:38:21.155482] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.155578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.155606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.155615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.155621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.155639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.165474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.165531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.165547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.165554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.165560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.165575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.175508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.175563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.175579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.175586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.175592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.175605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 
00:31:24.000 [2024-06-10 11:38:21.185541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.185601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.185616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.185623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.185629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.185643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.195554] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.195611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.195626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.195633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.195642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.195657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.000 [2024-06-10 11:38:21.205637] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.205712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.205727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.205735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.205742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.205755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 
00:31:24.000 [2024-06-10 11:38:21.215617] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.000 [2024-06-10 11:38:21.215674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.000 [2024-06-10 11:38:21.215689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.000 [2024-06-10 11:38:21.215696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.000 [2024-06-10 11:38:21.215702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.000 [2024-06-10 11:38:21.215715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.000 qpair failed and we were unable to recover it. 00:31:24.263 [2024-06-10 11:38:21.225646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.263 [2024-06-10 11:38:21.225702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.263 [2024-06-10 11:38:21.225717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.263 [2024-06-10 11:38:21.225726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.263 [2024-06-10 11:38:21.225732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.263 [2024-06-10 11:38:21.225746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.263 qpair failed and we were unable to recover it. 00:31:24.263 [2024-06-10 11:38:21.235680] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.263 [2024-06-10 11:38:21.235735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.263 [2024-06-10 11:38:21.235750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.263 [2024-06-10 11:38:21.235757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.263 [2024-06-10 11:38:21.235763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.263 [2024-06-10 11:38:21.235776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.263 qpair failed and we were unable to recover it. 
00:31:24.264 [2024-06-10 11:38:21.245842] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.245907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.245922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.245929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.245935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.245948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.255731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.255786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.255801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.255808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.255814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.255831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.265763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.265825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.265841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.265848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.265854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.265868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 
00:31:24.264 [2024-06-10 11:38:21.275662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.275722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.275738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.275744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.275750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.275763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.285700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.285756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.285771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.285782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.285788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.285802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.295845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.295920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.295937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.295945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.295951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.295965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 
00:31:24.264 [2024-06-10 11:38:21.305861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.305923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.305938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.305945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.305951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.305965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.315933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.315992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.316007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.316014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.316020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.316034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 00:31:24.264 [2024-06-10 11:38:21.325931] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.325982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.325997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.326004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.264 [2024-06-10 11:38:21.326010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.264 [2024-06-10 11:38:21.326024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.264 qpair failed and we were unable to recover it. 
00:31:24.264 [2024-06-10 11:38:21.335965] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.264 [2024-06-10 11:38:21.336028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.264 [2024-06-10 11:38:21.336044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.264 [2024-06-10 11:38:21.336051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.336057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.336070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.345948] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.346020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.346035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.346042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.346048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.346062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.356008] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.356065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.356081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.356088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.356094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.356108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 
00:31:24.265 [2024-06-10 11:38:21.365955] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.366011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.366027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.366034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.366041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.366055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.375942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.376001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.376016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.376026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.376032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.376046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.386067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.386134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.386149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.386156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.386162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.386175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 
00:31:24.265 [2024-06-10 11:38:21.396096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.396148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.396164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.396170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.396176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.396190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.406126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.406182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.406197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.406204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.406210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.406223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.416106] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.416201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.416217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.416224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.416230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.416244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 
00:31:24.265 [2024-06-10 11:38:21.426264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.265 [2024-06-10 11:38:21.426329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.265 [2024-06-10 11:38:21.426344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.265 [2024-06-10 11:38:21.426351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.265 [2024-06-10 11:38:21.426357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.265 [2024-06-10 11:38:21.426370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.265 qpair failed and we were unable to recover it. 00:31:24.265 [2024-06-10 11:38:21.436241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.266 [2024-06-10 11:38:21.436301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.266 [2024-06-10 11:38:21.436317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.266 [2024-06-10 11:38:21.436324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.266 [2024-06-10 11:38:21.436330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.266 [2024-06-10 11:38:21.436343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.266 qpair failed and we were unable to recover it. 00:31:24.266 [2024-06-10 11:38:21.446213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.266 [2024-06-10 11:38:21.446272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.266 [2024-06-10 11:38:21.446287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.266 [2024-06-10 11:38:21.446294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.266 [2024-06-10 11:38:21.446300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.266 [2024-06-10 11:38:21.446313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.266 qpair failed and we were unable to recover it. 
00:31:24.266 [2024-06-10 11:38:21.456277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.266 [2024-06-10 11:38:21.456334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.266 [2024-06-10 11:38:21.456350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.266 [2024-06-10 11:38:21.456356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.266 [2024-06-10 11:38:21.456362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.266 [2024-06-10 11:38:21.456376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.266 qpair failed and we were unable to recover it. 00:31:24.266 [2024-06-10 11:38:21.466347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.266 [2024-06-10 11:38:21.466408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.266 [2024-06-10 11:38:21.466426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.266 [2024-06-10 11:38:21.466433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.266 [2024-06-10 11:38:21.466439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.266 [2024-06-10 11:38:21.466452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.266 qpair failed and we were unable to recover it. 00:31:24.266 [2024-06-10 11:38:21.476347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.266 [2024-06-10 11:38:21.476402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.266 [2024-06-10 11:38:21.476417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.266 [2024-06-10 11:38:21.476424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.266 [2024-06-10 11:38:21.476430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.266 [2024-06-10 11:38:21.476444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.266 qpair failed and we were unable to recover it. 
00:31:24.266 [2024-06-10 11:38:21.486369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.486421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.486437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.486444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.486452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.486467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.496315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.496374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.496390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.496397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.496402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.496416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.506434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.506493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.506508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.506515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.506521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.506538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 
00:31:24.529 [2024-06-10 11:38:21.516444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.516501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.516524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.516533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.516540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.516558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.526519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.526598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.526614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.526622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.526628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.526642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.536524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.536591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.536614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.536622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.536630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.536648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 
00:31:24.529 [2024-06-10 11:38:21.546581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.546658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.546682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.546691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.546697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.546715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.556559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.556610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.556634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.556642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.556648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.556663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.566561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.566618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.566633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.566640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.566647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.566661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 
00:31:24.529 [2024-06-10 11:38:21.576609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.576671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.576687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.576694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.576702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.576716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.586632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.586692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.529 [2024-06-10 11:38:21.586707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.529 [2024-06-10 11:38:21.586714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.529 [2024-06-10 11:38:21.586720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.529 [2024-06-10 11:38:21.586734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.529 qpair failed and we were unable to recover it. 00:31:24.529 [2024-06-10 11:38:21.596667] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.529 [2024-06-10 11:38:21.596724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.596739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.596746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.596756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.596770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 
00:31:24.530 [2024-06-10 11:38:21.606708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.606762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.606778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.606785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.606790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.606804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.616721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.616780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.616795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.616802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.616808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.616826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.626760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.626824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.626840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.626846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.626852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.626867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 
00:31:24.530 [2024-06-10 11:38:21.636777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.636858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.636874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.636882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.636887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.636901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.646703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.646760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.646775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.646782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.646788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.646801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.656729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.656789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.656804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.656811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.656817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.656834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 
00:31:24.530 [2024-06-10 11:38:21.666858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.666921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.666936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.666943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.666949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.666962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.676935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.677033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.677048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.677055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.677061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.677074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 00:31:24.530 [2024-06-10 11:38:21.686936] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.686993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.687008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.687015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.530 [2024-06-10 11:38:21.687025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.530 [2024-06-10 11:38:21.687039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.530 qpair failed and we were unable to recover it. 
00:31:24.530 [2024-06-10 11:38:21.696951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.530 [2024-06-10 11:38:21.697012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.530 [2024-06-10 11:38:21.697027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.530 [2024-06-10 11:38:21.697034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.697040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.697054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 00:31:24.531 [2024-06-10 11:38:21.706990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.531 [2024-06-10 11:38:21.707185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.531 [2024-06-10 11:38:21.707202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.531 [2024-06-10 11:38:21.707209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.707215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.707229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 00:31:24.531 [2024-06-10 11:38:21.717001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.531 [2024-06-10 11:38:21.717057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.531 [2024-06-10 11:38:21.717072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.531 [2024-06-10 11:38:21.717078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.717085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.717098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 
00:31:24.531 [2024-06-10 11:38:21.726968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.531 [2024-06-10 11:38:21.727023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.531 [2024-06-10 11:38:21.727039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.531 [2024-06-10 11:38:21.727046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.727051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.727066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 00:31:24.531 [2024-06-10 11:38:21.737143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.531 [2024-06-10 11:38:21.737208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.531 [2024-06-10 11:38:21.737223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.531 [2024-06-10 11:38:21.737230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.737236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.737250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 00:31:24.531 [2024-06-10 11:38:21.747052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.531 [2024-06-10 11:38:21.747113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.531 [2024-06-10 11:38:21.747129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.531 [2024-06-10 11:38:21.747136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.531 [2024-06-10 11:38:21.747142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.531 [2024-06-10 11:38:21.747156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.531 qpair failed and we were unable to recover it. 
00:31:24.793 [2024-06-10 11:38:21.757121] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.757179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.757194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.757201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.757207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.757221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.767157] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.767223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.767238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.767245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.767251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.767265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.777071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.777129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.777144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.777155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.777161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.777174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 
00:31:24.794 [2024-06-10 11:38:21.787190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.787250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.787266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.787273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.787279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.787294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.797238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.797291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.797306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.797313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.797319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.797333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.807268] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.807322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.807336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.807343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.807349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.807363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 
00:31:24.794 [2024-06-10 11:38:21.817284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.817340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.817354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.817361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.817367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.817381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.827307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.827366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.827381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.827387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.827393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.827407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.837344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.837399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.837414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.837420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.837427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.837440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 
00:31:24.794 [2024-06-10 11:38:21.847374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.847430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.847444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.847451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.794 [2024-06-10 11:38:21.847457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.794 [2024-06-10 11:38:21.847470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.794 qpair failed and we were unable to recover it. 00:31:24.794 [2024-06-10 11:38:21.857400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.794 [2024-06-10 11:38:21.857459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.794 [2024-06-10 11:38:21.857475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.794 [2024-06-10 11:38:21.857481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.857487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.857501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.867419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.867479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.867497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.867504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.867510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.867523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 
00:31:24.795 [2024-06-10 11:38:21.877471] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.877524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.877539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.877546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.877552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.877566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.887358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.887414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.887429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.887436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.887442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.887456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.897567] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.897622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.897637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.897644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.897650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.897664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 
00:31:24.795 [2024-06-10 11:38:21.907548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.907615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.907638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.907646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.907654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.907676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.917599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.917661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.917684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.917692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.917700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.917717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.927583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.927652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.927668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.927675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.927682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.927697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 
00:31:24.795 [2024-06-10 11:38:21.937637] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.937695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.937710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.937717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.937723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.937737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.947648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.947708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.947723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.947730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.947736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.947750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.957681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.957738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.957758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.957765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.957771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.957786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 
00:31:24.795 [2024-06-10 11:38:21.967719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.795 [2024-06-10 11:38:21.967773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.795 [2024-06-10 11:38:21.967788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.795 [2024-06-10 11:38:21.967795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.795 [2024-06-10 11:38:21.967802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.795 [2024-06-10 11:38:21.967815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.795 qpair failed and we were unable to recover it. 00:31:24.795 [2024-06-10 11:38:21.977748] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.796 [2024-06-10 11:38:21.977806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.796 [2024-06-10 11:38:21.977825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.796 [2024-06-10 11:38:21.977833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.796 [2024-06-10 11:38:21.977839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.796 [2024-06-10 11:38:21.977853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.796 qpair failed and we were unable to recover it. 00:31:24.796 [2024-06-10 11:38:21.987778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.796 [2024-06-10 11:38:21.987836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.796 [2024-06-10 11:38:21.987852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.796 [2024-06-10 11:38:21.987859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.796 [2024-06-10 11:38:21.987865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.796 [2024-06-10 11:38:21.987879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.796 qpair failed and we were unable to recover it. 
00:31:24.796 [2024-06-10 11:38:21.997686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.796 [2024-06-10 11:38:21.997742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.796 [2024-06-10 11:38:21.997757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.796 [2024-06-10 11:38:21.997764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.796 [2024-06-10 11:38:21.997773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.796 [2024-06-10 11:38:21.997786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.796 qpair failed and we were unable to recover it. 00:31:24.796 [2024-06-10 11:38:22.007860] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.796 [2024-06-10 11:38:22.007915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.796 [2024-06-10 11:38:22.007930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.796 [2024-06-10 11:38:22.007937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.796 [2024-06-10 11:38:22.007942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:24.796 [2024-06-10 11:38:22.007956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:24.796 qpair failed and we were unable to recover it. 00:31:25.059 [2024-06-10 11:38:22.017869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.059 [2024-06-10 11:38:22.017927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.017942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.017949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.017956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.017969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 
00:31:25.060 [2024-06-10 11:38:22.027886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.027946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.027961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.027968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.027974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.027988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 00:31:25.060 [2024-06-10 11:38:22.037947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.038028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.038043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.038050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.038056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.038070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 00:31:25.060 [2024-06-10 11:38:22.047928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.047987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.048002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.048009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.048016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.048029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 
00:31:25.060 [2024-06-10 11:38:22.057946] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.058003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.058018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.058025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.058031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.058045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 00:31:25.060 [2024-06-10 11:38:22.068019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.068104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.068119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.068127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.068133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.068146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 00:31:25.060 [2024-06-10 11:38:22.078029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.060 [2024-06-10 11:38:22.078085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.060 [2024-06-10 11:38:22.078100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.060 [2024-06-10 11:38:22.078107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.060 [2024-06-10 11:38:22.078113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.060 [2024-06-10 11:38:22.078128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.060 qpair failed and we were unable to recover it. 
00:31:25.061 [2024-06-10 11:38:22.088044] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.088099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.088114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.088121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.088130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.088144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 00:31:25.061 [2024-06-10 11:38:22.098070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.098136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.098151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.098158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.098164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.098178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 00:31:25.061 [2024-06-10 11:38:22.108110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.108183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.108199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.108208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.108214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.108229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 
00:31:25.061 [2024-06-10 11:38:22.118017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.118086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.118101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.118108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.118115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.118129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 00:31:25.061 [2024-06-10 11:38:22.128132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.128183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.128197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.128204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.128211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.128225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 00:31:25.061 [2024-06-10 11:38:22.138195] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.138254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.061 [2024-06-10 11:38:22.138269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.061 [2024-06-10 11:38:22.138276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.061 [2024-06-10 11:38:22.138282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.061 [2024-06-10 11:38:22.138295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.061 qpair failed and we were unable to recover it. 
00:31:25.061 [2024-06-10 11:38:22.148232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.061 [2024-06-10 11:38:22.148294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.148309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.148315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.148321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.148335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 00:31:25.062 [2024-06-10 11:38:22.158258] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.158366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.158381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.158388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.158394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.158408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 00:31:25.062 [2024-06-10 11:38:22.168292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.168365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.168380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.168387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.168394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.168408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 
00:31:25.062 [2024-06-10 11:38:22.178298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.178353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.178367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.178379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.178385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.178398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 00:31:25.062 [2024-06-10 11:38:22.188328] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.188390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.188405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.188412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.188418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.188431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 00:31:25.062 [2024-06-10 11:38:22.198358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.198415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.198430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.062 [2024-06-10 11:38:22.198437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.062 [2024-06-10 11:38:22.198443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.062 [2024-06-10 11:38:22.198456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.062 qpair failed and we were unable to recover it. 
00:31:25.062 [2024-06-10 11:38:22.208380] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.062 [2024-06-10 11:38:22.208438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.062 [2024-06-10 11:38:22.208452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.208459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.208465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.208478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 00:31:25.063 [2024-06-10 11:38:22.218304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.218367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.218382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.218388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.218394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.218407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 00:31:25.063 [2024-06-10 11:38:22.228449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.228508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.228523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.228529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.228535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.228548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 
00:31:25.063 [2024-06-10 11:38:22.238469] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.238541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.238564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.238573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.238580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.238598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 00:31:25.063 [2024-06-10 11:38:22.248490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.248548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.248565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.248572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.248578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.248594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 00:31:25.063 [2024-06-10 11:38:22.258521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.258582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.258598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.258605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.063 [2024-06-10 11:38:22.258611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.063 [2024-06-10 11:38:22.258625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.063 qpair failed and we were unable to recover it. 
00:31:25.063 [2024-06-10 11:38:22.268590] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.063 [2024-06-10 11:38:22.268670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.063 [2024-06-10 11:38:22.268688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.063 [2024-06-10 11:38:22.268697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.064 [2024-06-10 11:38:22.268703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.064 [2024-06-10 11:38:22.268717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.064 qpair failed and we were unable to recover it. 00:31:25.064 [2024-06-10 11:38:22.278460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.064 [2024-06-10 11:38:22.278517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.064 [2024-06-10 11:38:22.278532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.064 [2024-06-10 11:38:22.278539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.064 [2024-06-10 11:38:22.278545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.064 [2024-06-10 11:38:22.278559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.064 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.288622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.288677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.288692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.288699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.288705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.288719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 
00:31:25.328 [2024-06-10 11:38:22.298521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.298595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.298610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.298617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.298624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.298637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.308665] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.308739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.308755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.308763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.308770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.308790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.318615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.318673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.318689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.318696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.318702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.318716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 
00:31:25.328 [2024-06-10 11:38:22.328703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.328761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.328776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.328783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.328790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.328804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.338784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.338846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.338861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.338868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.338874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.338888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.348656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.348713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.348728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.348735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.348741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.348754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 
00:31:25.328 [2024-06-10 11:38:22.358787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.358890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.358909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.358918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.358924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.358938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.368711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.368769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.368784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.368792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.368798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.328 [2024-06-10 11:38:22.368812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.328 qpair failed and we were unable to recover it. 00:31:25.328 [2024-06-10 11:38:22.378850] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.328 [2024-06-10 11:38:22.378907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.328 [2024-06-10 11:38:22.378922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.328 [2024-06-10 11:38:22.378930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.328 [2024-06-10 11:38:22.378936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.378950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 
00:31:25.329 [2024-06-10 11:38:22.388885] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.388946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.388961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.388968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.388974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.388988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.398902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.398959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.398974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.398981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.398988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.399006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.408944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.408998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.409013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.409020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.409026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.409040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 
00:31:25.329 [2024-06-10 11:38:22.418961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.419055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.419071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.419078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.419084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.419098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.428982] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.429043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.429058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.429065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.429071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.429085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.439002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.439057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.439072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.439079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.439085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.439099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 
00:31:25.329 [2024-06-10 11:38:22.449022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.449087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.449102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.449109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.449115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.449128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.459095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.459151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.459166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.459173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.459179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.459193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.469102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.469161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.469176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.469183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.469189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.469203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 
00:31:25.329 [2024-06-10 11:38:22.479093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.479150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.479165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.479172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.479178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.479192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.489181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.489237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.329 [2024-06-10 11:38:22.489252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.329 [2024-06-10 11:38:22.489258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.329 [2024-06-10 11:38:22.489267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.329 [2024-06-10 11:38:22.489281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.329 qpair failed and we were unable to recover it. 00:31:25.329 [2024-06-10 11:38:22.499133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.329 [2024-06-10 11:38:22.499191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.499206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.499213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.499219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.499232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 
00:31:25.330 [2024-06-10 11:38:22.509191] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.330 [2024-06-10 11:38:22.509251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.509266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.509273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.509279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.509293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 00:31:25.330 [2024-06-10 11:38:22.519204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.330 [2024-06-10 11:38:22.519258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.519273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.519280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.519286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.519300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 00:31:25.330 [2024-06-10 11:38:22.529266] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.330 [2024-06-10 11:38:22.529369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.529385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.529392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.529398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.529411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 
00:31:25.330 [2024-06-10 11:38:22.539180] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.330 [2024-06-10 11:38:22.539238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.539253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.539260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.539266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.539280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 00:31:25.330 [2024-06-10 11:38:22.549309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.330 [2024-06-10 11:38:22.549369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.330 [2024-06-10 11:38:22.549384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.330 [2024-06-10 11:38:22.549391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.330 [2024-06-10 11:38:22.549397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.330 [2024-06-10 11:38:22.549411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.330 qpair failed and we were unable to recover it. 00:31:25.591 [2024-06-10 11:38:22.559401] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.591 [2024-06-10 11:38:22.559467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.591 [2024-06-10 11:38:22.559483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.591 [2024-06-10 11:38:22.559490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.591 [2024-06-10 11:38:22.559496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.591 [2024-06-10 11:38:22.559510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.591 qpair failed and we were unable to recover it. 
00:31:25.591 [2024-06-10 11:38:22.569253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.591 [2024-06-10 11:38:22.569309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.591 [2024-06-10 11:38:22.569324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.591 [2024-06-10 11:38:22.569331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.591 [2024-06-10 11:38:22.569337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.591 [2024-06-10 11:38:22.569351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.591 qpair failed and we were unable to recover it. 00:31:25.591 [2024-06-10 11:38:22.579386] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.591 [2024-06-10 11:38:22.579443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.591 [2024-06-10 11:38:22.579458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.591 [2024-06-10 11:38:22.579468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.591 [2024-06-10 11:38:22.579474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.579488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.589417] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.589479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.589494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.589501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.589507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.589521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 
00:31:25.592 [2024-06-10 11:38:22.599345] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.599398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.599413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.599420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.599426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.599440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.609432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.609490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.609505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.609512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.609518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.609532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.619500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.619557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.619572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.619579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.619585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.619599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 
00:31:25.592 [2024-06-10 11:38:22.629518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.629592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.629608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.629614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.629620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.629635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.639557] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.639617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.639641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.639649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.639657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.639676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.649638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.649703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.649719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.649726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.649732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.649748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 
00:31:25.592 [2024-06-10 11:38:22.659611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.659665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.659681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.659689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.659695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.659710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.669621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.669684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.669700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.669711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.669717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.669732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 00:31:25.592 [2024-06-10 11:38:22.679543] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.679604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.679619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.679626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.679632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.592 [2024-06-10 11:38:22.679647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.592 qpair failed and we were unable to recover it. 
00:31:25.592 [2024-06-10 11:38:22.689640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.592 [2024-06-10 11:38:22.689694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.592 [2024-06-10 11:38:22.689710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.592 [2024-06-10 11:38:22.689717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.592 [2024-06-10 11:38:22.689723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.689737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.699653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.699715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.699730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.699737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.699743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.699756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.709661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.709761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.709776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.709783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.709790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.709804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 
00:31:25.593 [2024-06-10 11:38:22.719797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.719867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.719882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.719889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.719897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.719910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.729779] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.729836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.729851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.729858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.729864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.729878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.739827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.739885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.739899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.739906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.739913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.739926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 
00:31:25.593 [2024-06-10 11:38:22.749866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.749930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.749944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.749951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.749957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.749971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.759753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.759808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.759830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.759838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.759844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.759857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.769909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.769964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.769978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.769985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.769991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.770005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 
00:31:25.593 [2024-06-10 11:38:22.779941] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.780020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.780035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.780041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.780047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.780060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.789930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.789993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.790008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.790015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.790021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.593 [2024-06-10 11:38:22.790034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.593 qpair failed and we were unable to recover it. 00:31:25.593 [2024-06-10 11:38:22.799994] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.593 [2024-06-10 11:38:22.800050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.593 [2024-06-10 11:38:22.800065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.593 [2024-06-10 11:38:22.800072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.593 [2024-06-10 11:38:22.800078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.594 [2024-06-10 11:38:22.800095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.594 qpair failed and we were unable to recover it. 
00:31:25.594 [2024-06-10 11:38:22.810024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.594 [2024-06-10 11:38:22.810123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.594 [2024-06-10 11:38:22.810138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.594 [2024-06-10 11:38:22.810145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.594 [2024-06-10 11:38:22.810151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.594 [2024-06-10 11:38:22.810165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.594 qpair failed and we were unable to recover it. 00:31:25.856 [2024-06-10 11:38:22.820064] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.820118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.856 [2024-06-10 11:38:22.820133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.856 [2024-06-10 11:38:22.820140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.856 [2024-06-10 11:38:22.820146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.856 [2024-06-10 11:38:22.820159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.856 qpair failed and we were unable to recover it. 00:31:25.856 [2024-06-10 11:38:22.830098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.830158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.856 [2024-06-10 11:38:22.830173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.856 [2024-06-10 11:38:22.830180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.856 [2024-06-10 11:38:22.830186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.856 [2024-06-10 11:38:22.830199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.856 qpair failed and we were unable to recover it. 
00:31:25.856 [2024-06-10 11:38:22.840058] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.840121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.856 [2024-06-10 11:38:22.840136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.856 [2024-06-10 11:38:22.840143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.856 [2024-06-10 11:38:22.840149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.856 [2024-06-10 11:38:22.840162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.856 qpair failed and we were unable to recover it. 00:31:25.856 [2024-06-10 11:38:22.850144] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.850206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.856 [2024-06-10 11:38:22.850224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.856 [2024-06-10 11:38:22.850231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.856 [2024-06-10 11:38:22.850237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.856 [2024-06-10 11:38:22.850251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.856 qpair failed and we were unable to recover it. 00:31:25.856 [2024-06-10 11:38:22.860163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.860225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.856 [2024-06-10 11:38:22.860240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.856 [2024-06-10 11:38:22.860246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.856 [2024-06-10 11:38:22.860252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.856 [2024-06-10 11:38:22.860266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.856 qpair failed and we were unable to recover it. 
00:31:25.856 [2024-06-10 11:38:22.870186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.856 [2024-06-10 11:38:22.870276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.870291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.870298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.870304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.870318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.880107] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.880169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.880185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.880192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.880198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.880211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.890221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.890274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.890289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.890296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.890305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.890319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 
00:31:25.857 [2024-06-10 11:38:22.900303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.900378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.900393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.900400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.900407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.900420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.910286] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.910346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.910361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.910368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.910374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.910388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.920251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.920306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.920321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.920328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.920334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.920348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 
00:31:25.857 [2024-06-10 11:38:22.930230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.930282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.930297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.930304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.930311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.930324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.940386] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.940447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.940462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.940469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.940475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.940488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.950371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.950484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.950499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.950506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.950512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.950525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 
00:31:25.857 [2024-06-10 11:38:22.960421] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.960475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.960490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.960497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.960503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.960516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.970498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.970599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.970622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.970631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.970637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.857 [2024-06-10 11:38:22.970655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.857 qpair failed and we were unable to recover it. 00:31:25.857 [2024-06-10 11:38:22.980493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.857 [2024-06-10 11:38:22.980556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.857 [2024-06-10 11:38:22.980580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.857 [2024-06-10 11:38:22.980596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.857 [2024-06-10 11:38:22.980603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:22.980621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 
00:31:25.858 [2024-06-10 11:38:22.990529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:22.990595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:22.990618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:22.990626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:22.990633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:22.990652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.000549] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.000607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.000624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.000631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.000638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.000652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.010583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.010636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.010651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.010658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.010664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.010679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 
00:31:25.858 [2024-06-10 11:38:23.020666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.020721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.020736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.020744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.020750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.020763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.030652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.030712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.030728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.030736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.030742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.030756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.040659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.040729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.040745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.040752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.040759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.040773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 
00:31:25.858 [2024-06-10 11:38:23.050688] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.050741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.050755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.050762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.050768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.050782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.060879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.060946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.060961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.060968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.060974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.060988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 00:31:25.858 [2024-06-10 11:38:23.070639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.858 [2024-06-10 11:38:23.070709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.858 [2024-06-10 11:38:23.070723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.858 [2024-06-10 11:38:23.070734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.858 [2024-06-10 11:38:23.070740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:25.858 [2024-06-10 11:38:23.070754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.858 qpair failed and we were unable to recover it. 
00:31:26.124 [2024-06-10 11:38:23.080663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.124 [2024-06-10 11:38:23.080721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.124 [2024-06-10 11:38:23.080736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.124 [2024-06-10 11:38:23.080743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.124 [2024-06-10 11:38:23.080750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.124 [2024-06-10 11:38:23.080763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.124 qpair failed and we were unable to recover it. 00:31:26.124 [2024-06-10 11:38:23.090796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.124 [2024-06-10 11:38:23.090855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.124 [2024-06-10 11:38:23.090871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.124 [2024-06-10 11:38:23.090877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.124 [2024-06-10 11:38:23.090883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.124 [2024-06-10 11:38:23.090898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.124 qpair failed and we were unable to recover it. 00:31:26.124 [2024-06-10 11:38:23.100841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.124 [2024-06-10 11:38:23.100895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.124 [2024-06-10 11:38:23.100910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.124 [2024-06-10 11:38:23.100917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.124 [2024-06-10 11:38:23.100923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.100937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 
00:31:26.125 [2024-06-10 11:38:23.110862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.110925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.110939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.110946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.110952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.110966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.120886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.120949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.120964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.120971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.120976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.120990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.130791] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.130848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.130863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.130870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.130875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.130889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 
00:31:26.125 [2024-06-10 11:38:23.140960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.141019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.141033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.141040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.141046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.141060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.150960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.151018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.151033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.151040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.151046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.151060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.160991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.161053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.161071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.161079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.161085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.161099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 
00:31:26.125 [2024-06-10 11:38:23.171011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.171066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.171082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.171089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.171095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.171111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.181052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.181110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.181125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.181132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.181138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.181152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.191099] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.191172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.191188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.191198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.191204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.191219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 
00:31:26.125 [2024-06-10 11:38:23.201128] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.201184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.201200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.201207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.201213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.201230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.211152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.211247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.125 [2024-06-10 11:38:23.211263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.125 [2024-06-10 11:38:23.211270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.125 [2024-06-10 11:38:23.211276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.125 [2024-06-10 11:38:23.211290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.125 qpair failed and we were unable to recover it. 00:31:26.125 [2024-06-10 11:38:23.221185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.125 [2024-06-10 11:38:23.221294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.221309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.221317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.221323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.221336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 
00:31:26.126 [2024-06-10 11:38:23.231147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.231236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.231252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.231259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.231265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.231280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.241236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.241296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.241312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.241319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.241325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.241339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.251317] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.251394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.251412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.251420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.251426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.251439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 
00:31:26.126 [2024-06-10 11:38:23.261298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.261352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.261367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.261374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.261380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.261394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.271279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.271342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.271357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.271364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.271370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.271384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.281342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.281446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.281462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.281468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.281475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.281488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 
00:31:26.126 [2024-06-10 11:38:23.291371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.291449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.291464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.291472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.291482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.291495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.301415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.301470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.301485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.301492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.301498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.301511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.311449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.311510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.311525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.311532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.311538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.311551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 
00:31:26.126 [2024-06-10 11:38:23.321353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.321406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.321422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.321428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.321434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.321447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.126 [2024-06-10 11:38:23.331480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.126 [2024-06-10 11:38:23.331534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.126 [2024-06-10 11:38:23.331549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.126 [2024-06-10 11:38:23.331555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.126 [2024-06-10 11:38:23.331561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.126 [2024-06-10 11:38:23.331575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.126 qpair failed and we were unable to recover it. 00:31:26.127 [2024-06-10 11:38:23.341529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.127 [2024-06-10 11:38:23.341598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.127 [2024-06-10 11:38:23.341621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.127 [2024-06-10 11:38:23.341629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.127 [2024-06-10 11:38:23.341636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.127 [2024-06-10 11:38:23.341654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.127 qpair failed and we were unable to recover it. 
00:31:26.389 [2024-06-10 11:38:23.351524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.389 [2024-06-10 11:38:23.351598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.389 [2024-06-10 11:38:23.351621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.389 [2024-06-10 11:38:23.351629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.389 [2024-06-10 11:38:23.351635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.389 [2024-06-10 11:38:23.351653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.389 qpair failed and we were unable to recover it. 00:31:26.389 [2024-06-10 11:38:23.361498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.389 [2024-06-10 11:38:23.361563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.389 [2024-06-10 11:38:23.361587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.389 [2024-06-10 11:38:23.361595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.389 [2024-06-10 11:38:23.361602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.389 [2024-06-10 11:38:23.361620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.371605] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.371660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.371676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.371683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.371689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.371704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 
00:31:26.390 [2024-06-10 11:38:23.381711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.381768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.381783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.381790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.381800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.381815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.391659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.391725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.391741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.391748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.391754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.391767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.401708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.401774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.401789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.401796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.401802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.401815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 
00:31:26.390 [2024-06-10 11:38:23.411703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.411758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.411773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.411779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.411785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.411799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.421764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.421817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.421836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.421842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.421848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.421862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.431765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.431856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.431872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.431879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.431885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.431899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 
00:31:26.390 [2024-06-10 11:38:23.441693] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.441746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.441762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.441769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.441775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.441788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.451752] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.451852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.451867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.451874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.451880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.451895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.461864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.461921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.461937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.461944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.461951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.461965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 
00:31:26.390 [2024-06-10 11:38:23.471906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.390 [2024-06-10 11:38:23.471998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.390 [2024-06-10 11:38:23.472013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.390 [2024-06-10 11:38:23.472025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.390 [2024-06-10 11:38:23.472031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.390 [2024-06-10 11:38:23.472045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.390 qpair failed and we were unable to recover it. 00:31:26.390 [2024-06-10 11:38:23.481915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.481967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.481982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.481990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.481996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.482010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.491819] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.491881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.491896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.491902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.491909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.491923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 
00:31:26.391 [2024-06-10 11:38:23.501854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.501951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.501966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.501974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.501981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.501995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.511990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.512077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.512092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.512099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.512105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.512118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.522001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.522058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.522073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.522081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.522088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.522101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 
00:31:26.391 [2024-06-10 11:38:23.532054] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.532112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.532126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.532133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.532139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.532153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.541964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.542024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.542039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.542046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.542052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.542066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.552056] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.552119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.552133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.552140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.552146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.552160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 
00:31:26.391 [2024-06-10 11:38:23.562041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.562144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.562162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.562169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.562175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.562189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.572159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.572218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.572233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.572239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.572245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.572259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 00:31:26.391 [2024-06-10 11:38:23.582184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.582238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.582253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.582260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.582266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.391 [2024-06-10 11:38:23.582279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.391 qpair failed and we were unable to recover it. 
00:31:26.391 [2024-06-10 11:38:23.592190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.391 [2024-06-10 11:38:23.592261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.391 [2024-06-10 11:38:23.592275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.391 [2024-06-10 11:38:23.592282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.391 [2024-06-10 11:38:23.592288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.392 [2024-06-10 11:38:23.592301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.392 qpair failed and we were unable to recover it. 00:31:26.392 [2024-06-10 11:38:23.602221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.392 [2024-06-10 11:38:23.602322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.392 [2024-06-10 11:38:23.602337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.392 [2024-06-10 11:38:23.602345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.392 [2024-06-10 11:38:23.602351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.392 [2024-06-10 11:38:23.602368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.392 qpair failed and we were unable to recover it. 00:31:26.392 [2024-06-10 11:38:23.612263] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.392 [2024-06-10 11:38:23.612318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.392 [2024-06-10 11:38:23.612333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.392 [2024-06-10 11:38:23.612339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.392 [2024-06-10 11:38:23.612346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.392 [2024-06-10 11:38:23.612359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.392 qpair failed and we were unable to recover it. 
00:31:26.654 [2024-06-10 11:38:23.622363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.622435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.622450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.622456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.622462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.622476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.632313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.632380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.632395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.632402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.632407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.632421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.642350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.642410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.642426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.642433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.642441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.642457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 
00:31:26.654 [2024-06-10 11:38:23.652371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.652431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.652449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.652457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.652463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.652476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.662416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.662474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.662489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.662497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.662502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.662516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.672436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.672497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.672512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.672519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.672525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.672539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 
00:31:26.654 [2024-06-10 11:38:23.682464] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.682519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.682534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.682541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.682547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.682560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.692528] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.692590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.692613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.692621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.692635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.692653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 00:31:26.654 [2024-06-10 11:38:23.702540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.654 [2024-06-10 11:38:23.702627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.654 [2024-06-10 11:38:23.702650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.654 [2024-06-10 11:38:23.702658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.654 [2024-06-10 11:38:23.702665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.654 [2024-06-10 11:38:23.702684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.654 qpair failed and we were unable to recover it. 
00:31:26.654 [2024-06-10 11:38:23.712589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.712689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.712712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.712721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.712728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.712746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.722472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.722563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.722580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.722587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.722594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.722608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.732605] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.732711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.732726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.732734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.732740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.732754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 
00:31:26.655 [2024-06-10 11:38:23.742629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.742686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.742701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.742708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.742714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.742728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.752546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.752605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.752620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.752627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.752633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.752646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.762692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.762743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.762758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.762765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.762771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.762784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 
00:31:26.655 [2024-06-10 11:38:23.772688] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.772745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.772760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.772767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.772773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.772787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.782758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.782816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.782834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.782841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.782851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.782865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.792790] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.792855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.792870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.792877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.792882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.792896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 
00:31:26.655 [2024-06-10 11:38:23.802811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.802870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.802885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.802891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.802897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.802911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.812851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.812903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.812918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.812924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.812930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.655 [2024-06-10 11:38:23.812944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.655 qpair failed and we were unable to recover it. 00:31:26.655 [2024-06-10 11:38:23.822760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.655 [2024-06-10 11:38:23.822818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.655 [2024-06-10 11:38:23.822836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.655 [2024-06-10 11:38:23.822843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.655 [2024-06-10 11:38:23.822849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.822862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 
00:31:26.656 [2024-06-10 11:38:23.832891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.656 [2024-06-10 11:38:23.832953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.656 [2024-06-10 11:38:23.832968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.656 [2024-06-10 11:38:23.832975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.656 [2024-06-10 11:38:23.832981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.832995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 00:31:26.656 [2024-06-10 11:38:23.842922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.656 [2024-06-10 11:38:23.842978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.656 [2024-06-10 11:38:23.842993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.656 [2024-06-10 11:38:23.843000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.656 [2024-06-10 11:38:23.843006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.843020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 00:31:26.656 [2024-06-10 11:38:23.852987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.656 [2024-06-10 11:38:23.853066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.656 [2024-06-10 11:38:23.853081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.656 [2024-06-10 11:38:23.853089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.656 [2024-06-10 11:38:23.853096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.853109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 
00:31:26.656 [2024-06-10 11:38:23.862996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.656 [2024-06-10 11:38:23.863053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.656 [2024-06-10 11:38:23.863068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.656 [2024-06-10 11:38:23.863075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.656 [2024-06-10 11:38:23.863081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.863095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 00:31:26.656 [2024-06-10 11:38:23.872893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.656 [2024-06-10 11:38:23.872951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.656 [2024-06-10 11:38:23.872965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.656 [2024-06-10 11:38:23.872975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.656 [2024-06-10 11:38:23.872982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.656 [2024-06-10 11:38:23.872995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.656 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.882984] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.883064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.883079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.883086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.883092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.883106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 
00:31:26.919 [2024-06-10 11:38:23.893096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.893151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.893165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.893172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.893178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.893191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.903109] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.903201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.903217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.903224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.903230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.903243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.913138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.913204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.913219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.913226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.913232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.913245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 
00:31:26.919 [2024-06-10 11:38:23.923149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.923207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.923222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.923229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.923235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.923248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.933138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.933192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.933207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.933213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.933219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.933233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.943220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.943274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.943290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.943296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.943302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.943316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 
00:31:26.919 [2024-06-10 11:38:23.953279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.953340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.953354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.953361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.953367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.953380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.963313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.963376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.963394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.963401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.963407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.963421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 00:31:26.919 [2024-06-10 11:38:23.973246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.919 [2024-06-10 11:38:23.973303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.919 [2024-06-10 11:38:23.973317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.919 [2024-06-10 11:38:23.973324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.919 [2024-06-10 11:38:23.973330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.919 [2024-06-10 11:38:23.973344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.919 qpair failed and we were unable to recover it. 
00:31:26.919 [2024-06-10 11:38:23.983312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:23.983367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:23.983382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:23.983389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:23.983394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:23.983408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:23.993333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:23.993407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:23.993422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:23.993429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:23.993436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:23.993450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.003394] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.003444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.003459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.003466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.003472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.003489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 
00:31:26.920 [2024-06-10 11:38:24.013381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.013435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.013449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.013456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.013462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.013476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.023463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.023519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.023534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.023541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.023547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.023561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.033460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.033527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.033550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.033558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.033566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.033584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 
00:31:26.920 [2024-06-10 11:38:24.043485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.043545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.043561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.043568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.043575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.043590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.053488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.053546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.053565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.053572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.053579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.053592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.063562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.063624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.063647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.063655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.063663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.063681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 
00:31:26.920 [2024-06-10 11:38:24.073577] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.920 [2024-06-10 11:38:24.073646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.920 [2024-06-10 11:38:24.073669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.920 [2024-06-10 11:38:24.073678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.920 [2024-06-10 11:38:24.073685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.920 [2024-06-10 11:38:24.073703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.920 qpair failed and we were unable to recover it. 00:31:26.920 [2024-06-10 11:38:24.083567] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.083622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.083639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.083646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.083652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.083667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 00:31:26.921 [2024-06-10 11:38:24.093643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.093722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.093737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.093744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.093751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.093769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 
00:31:26.921 [2024-06-10 11:38:24.103562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.103657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.103672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.103679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.103686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.103700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 00:31:26.921 [2024-06-10 11:38:24.113639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.113700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.113715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.113722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.113728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.113742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 00:31:26.921 [2024-06-10 11:38:24.123715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.123767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.123781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.123788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.123794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.123808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 
00:31:26.921 [2024-06-10 11:38:24.133739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.921 [2024-06-10 11:38:24.133793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.921 [2024-06-10 11:38:24.133808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.921 [2024-06-10 11:38:24.133814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.921 [2024-06-10 11:38:24.133820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:26.921 [2024-06-10 11:38:24.133839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.921 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.143660] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.143726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.143741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.143748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.143754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.143768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.153795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.153864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.153880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.153886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.153892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.153906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 
00:31:27.184 [2024-06-10 11:38:24.163835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.163908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.163923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.163930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.163937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.163951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.173856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.173912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.173927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.173934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.173940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.173954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.183929] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.183986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.184000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.184007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.184017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.184031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 
00:31:27.184 [2024-06-10 11:38:24.193906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.193969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.193985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.193992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.194002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.194017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.203958] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.204010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.184 [2024-06-10 11:38:24.204025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.184 [2024-06-10 11:38:24.204033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.184 [2024-06-10 11:38:24.204039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.184 [2024-06-10 11:38:24.204053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.184 qpair failed and we were unable to recover it. 00:31:27.184 [2024-06-10 11:38:24.213844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.184 [2024-06-10 11:38:24.213898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.213913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.213920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.213926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.213940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 
00:31:27.185 [2024-06-10 11:38:24.223987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.224042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.224056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.224063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.224069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.224083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.234033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.234097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.234113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.234120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.234125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.234139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.244043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.244145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.244160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.244168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.244175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.244188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 
00:31:27.185 [2024-06-10 11:38:24.254076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.254132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.254147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.254154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.254160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.254174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.264005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.264064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.264079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.264086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.264092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.264105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.274145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.274203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.274218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.274229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.274235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.274248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 
00:31:27.185 [2024-06-10 11:38:24.284159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.284212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.284227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.284234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.284241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.284254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.294082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.294137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.294151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.294159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.294164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.294178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.304244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.304315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.304329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.304336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.304343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.304357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 
00:31:27.185 [2024-06-10 11:38:24.314230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.314293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.314308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.314315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.314321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.185 [2024-06-10 11:38:24.314335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.185 qpair failed and we were unable to recover it. 00:31:27.185 [2024-06-10 11:38:24.324287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.185 [2024-06-10 11:38:24.324345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.185 [2024-06-10 11:38:24.324360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.185 [2024-06-10 11:38:24.324367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.185 [2024-06-10 11:38:24.324373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.324386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.186 [2024-06-10 11:38:24.334307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.334361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.334376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.334383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.334389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.334403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 
00:31:27.186 [2024-06-10 11:38:24.344340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.344402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.344417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.344424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.344430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.344444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.186 [2024-06-10 11:38:24.354386] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.354439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.354454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.354460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.354466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.354480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.186 [2024-06-10 11:38:24.364400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.364494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.364510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.364521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.364527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.364541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 
00:31:27.186 [2024-06-10 11:38:24.374425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.374478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.374493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.374500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.374506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.374520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.186 [2024-06-10 11:38:24.384500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.384593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.384608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.384615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.384621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.384634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.186 [2024-06-10 11:38:24.394376] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.394439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.394454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.394461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.394468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.394481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 
00:31:27.186 [2024-06-10 11:38:24.404511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.186 [2024-06-10 11:38:24.404592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.186 [2024-06-10 11:38:24.404607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.186 [2024-06-10 11:38:24.404615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.186 [2024-06-10 11:38:24.404621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.186 [2024-06-10 11:38:24.404635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.186 qpair failed and we were unable to recover it. 00:31:27.449 [2024-06-10 11:38:24.414551] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.414605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.414619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.414626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.414632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.414646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 00:31:27.449 [2024-06-10 11:38:24.424583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.424640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.424655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.424662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.424668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.424681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 
00:31:27.449 [2024-06-10 11:38:24.434588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.434646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.434661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.434668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.434674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.434687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 00:31:27.449 [2024-06-10 11:38:24.444573] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.444680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.444696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.444704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.444710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.444724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 00:31:27.449 [2024-06-10 11:38:24.454659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.454710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.454732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.454739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.454745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.454758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 
00:31:27.449 [2024-06-10 11:38:24.464674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.449 [2024-06-10 11:38:24.464732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.449 [2024-06-10 11:38:24.464747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.449 [2024-06-10 11:38:24.464754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.449 [2024-06-10 11:38:24.464760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.449 [2024-06-10 11:38:24.464774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.449 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.474714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.474784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.474799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.474806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.474811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.474828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.484609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.484665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.484680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.484687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.484693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.484707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 
00:31:27.450 [2024-06-10 11:38:24.494795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.494874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.494890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.494898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.494904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.494922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.504785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.504843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.504859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.504865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.504872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.504885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.514835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.514896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.514911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.514918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.514924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.514938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 
00:31:27.450 [2024-06-10 11:38:24.524842] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.524902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.524917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.524924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.524930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.524944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.534857] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.534914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.534930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.534936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.534942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.534956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.544905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.544960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.544978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.544986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.544991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.545005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 
00:31:27.450 [2024-06-10 11:38:24.554940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.555003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.555018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.555025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.555031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.555045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.564883] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.564974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.564989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.564996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.565002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.565016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 00:31:27.450 [2024-06-10 11:38:24.574949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.575067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.575090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.450 [2024-06-10 11:38:24.575098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.450 [2024-06-10 11:38:24.575104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.450 [2024-06-10 11:38:24.575119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.450 qpair failed and we were unable to recover it. 
00:31:27.450 [2024-06-10 11:38:24.585014] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.450 [2024-06-10 11:38:24.585071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.450 [2024-06-10 11:38:24.585087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.585094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.585104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.585117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.595002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.595067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.595081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.595089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.595094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.595109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.605045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.605096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.605111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.605118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.605124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.605137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 
00:31:27.451 [2024-06-10 11:38:24.615098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.615154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.615169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.615176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.615182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.615195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.625124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.625183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.625198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.625205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.625211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.625225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.635139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.635202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.635217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.635224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.635230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.635244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 
00:31:27.451 [2024-06-10 11:38:24.645047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.645105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.645121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.645128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.645133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.645147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.655079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.655136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.655151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.655157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.655164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.655178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 00:31:27.451 [2024-06-10 11:38:24.665227] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.451 [2024-06-10 11:38:24.665288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.451 [2024-06-10 11:38:24.665303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.451 [2024-06-10 11:38:24.665310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.451 [2024-06-10 11:38:24.665316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.451 [2024-06-10 11:38:24.665329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.451 qpair failed and we were unable to recover it. 
00:31:27.713 [2024-06-10 11:38:24.675139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.713 [2024-06-10 11:38:24.675198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.713 [2024-06-10 11:38:24.675213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.713 [2024-06-10 11:38:24.675223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.713 [2024-06-10 11:38:24.675229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.713 [2024-06-10 11:38:24.675243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.713 qpair failed and we were unable to recover it. 00:31:27.713 [2024-06-10 11:38:24.685279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.713 [2024-06-10 11:38:24.685341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.713 [2024-06-10 11:38:24.685356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.713 [2024-06-10 11:38:24.685363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.713 [2024-06-10 11:38:24.685369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.685382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.695311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.695368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.695383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.695390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.695396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.695410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 
00:31:27.714 [2024-06-10 11:38:24.705338] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.705395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.705410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.705417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.705423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.705436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.715377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.715436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.715451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.715457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.715463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.715477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.725368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.725429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.725444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.725451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.725457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.725470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 
00:31:27.714 [2024-06-10 11:38:24.735391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.735450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.735465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.735472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.735478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.735492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.745455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.745540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.745556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.745563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.745569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.745583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.755456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.755533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.755556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.755565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.755571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.755589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 
00:31:27.714 [2024-06-10 11:38:24.765503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.765555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.765572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.765584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.765590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.765605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.775458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.775514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.775530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.775537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.775543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.775557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 00:31:27.714 [2024-06-10 11:38:24.785585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.785644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.785667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.785675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.785682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.714 [2024-06-10 11:38:24.785701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.714 qpair failed and we were unable to recover it. 
00:31:27.714 [2024-06-10 11:38:24.795520] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.714 [2024-06-10 11:38:24.795583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.714 [2024-06-10 11:38:24.795600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.714 [2024-06-10 11:38:24.795607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.714 [2024-06-10 11:38:24.795613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.795627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.805590] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.805686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.805703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.805709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.805715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.805730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.815691] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.815750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.815765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.815772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.815778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.815792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 
00:31:27.715 [2024-06-10 11:38:24.825675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.825773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.825788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.825795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.825801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.825815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.835680] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.835737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.835752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.835759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.835765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.835779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.845709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.845763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.845778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.845785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.845791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.845805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 
00:31:27.715 [2024-06-10 11:38:24.855859] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.855948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.855967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.855974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.855980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.855994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.865778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.865837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.865853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.865860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.865866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.865880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.875781] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.875843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.875858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.875865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.875871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.875885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 
00:31:27.715 [2024-06-10 11:38:24.885827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.885898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.885913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.885920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.885926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.885940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.895739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.895793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.895808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.895815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.895860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.895880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 00:31:27.715 [2024-06-10 11:38:24.905895] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.715 [2024-06-10 11:38:24.905952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.715 [2024-06-10 11:38:24.905967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.715 [2024-06-10 11:38:24.905974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.715 [2024-06-10 11:38:24.905980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.715 [2024-06-10 11:38:24.905994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.715 qpair failed and we were unable to recover it. 
00:31:27.715 [2024-06-10 11:38:24.915920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.716 [2024-06-10 11:38:24.915979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.716 [2024-06-10 11:38:24.915994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.716 [2024-06-10 11:38:24.916001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.716 [2024-06-10 11:38:24.916007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.716 [2024-06-10 11:38:24.916020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.716 qpair failed and we were unable to recover it. 00:31:27.716 [2024-06-10 11:38:24.925937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.716 [2024-06-10 11:38:24.925989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.716 [2024-06-10 11:38:24.926004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.716 [2024-06-10 11:38:24.926011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.716 [2024-06-10 11:38:24.926017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.716 [2024-06-10 11:38:24.926031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.716 qpair failed and we were unable to recover it. 00:31:27.716 [2024-06-10 11:38:24.935937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.716 [2024-06-10 11:38:24.935999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.716 [2024-06-10 11:38:24.936015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.716 [2024-06-10 11:38:24.936022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.716 [2024-06-10 11:38:24.936028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.716 [2024-06-10 11:38:24.936041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.716 qpair failed and we were unable to recover it. 
00:31:27.978 [2024-06-10 11:38:24.945967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.946038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.946056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.946063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.946069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.946083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:24.956132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.956192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.956207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.956214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.956220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.956234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:24.966068] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.966140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.966156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.966162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.966169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.966183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 
00:31:27.978 [2024-06-10 11:38:24.976083] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.976132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.976148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.976155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.976161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.976175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:24.986169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.986285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.986301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.986308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.986320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.986333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:24.996159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:24.996222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:24.996237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:24.996244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:24.996250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:24.996263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 
00:31:27.978 [2024-06-10 11:38:25.006212] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:25.006271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:25.006286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:25.006293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:25.006299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:25.006313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:25.016211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.978 [2024-06-10 11:38:25.016265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.978 [2024-06-10 11:38:25.016280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.978 [2024-06-10 11:38:25.016288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.978 [2024-06-10 11:38:25.016294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.978 [2024-06-10 11:38:25.016307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.978 qpair failed and we were unable to recover it. 00:31:27.978 [2024-06-10 11:38:25.026242] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.026338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.026354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.026361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.026367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.026381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 
00:31:27.979 [2024-06-10 11:38:25.036276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.036347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.036363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.036370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.036378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.036394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.046298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.046397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.046414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.046421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.046427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.046440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.056390] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.056472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.056488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.056496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.056502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.056515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 
00:31:27.979 [2024-06-10 11:38:25.066305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.066361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.066377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.066384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.066390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.066403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.076356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.076419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.076434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.076441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.076450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.076464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.086315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.086370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.086385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.086392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.086398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.086411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 
00:31:27.979 [2024-06-10 11:38:25.096460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.096516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.096531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.096538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.096544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.096557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.106358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.106420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.106436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.106443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.106449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.106463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.116470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.116574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.116590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.116597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.116604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.116618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 
00:31:27.979 [2024-06-10 11:38:25.126488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.126543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.126558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.126565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.126571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.979 [2024-06-10 11:38:25.126585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.979 qpair failed and we were unable to recover it. 00:31:27.979 [2024-06-10 11:38:25.136571] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.979 [2024-06-10 11:38:25.136626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.979 [2024-06-10 11:38:25.136642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.979 [2024-06-10 11:38:25.136649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.979 [2024-06-10 11:38:25.136656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.136669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 00:31:27.980 [2024-06-10 11:38:25.146595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.146652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.146666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.146673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.146680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.146693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 
00:31:27.980 [2024-06-10 11:38:25.156655] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.156717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.156732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.156738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.156744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.156758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 00:31:27.980 [2024-06-10 11:38:25.166640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.166697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.166713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.166723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.166729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.166743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 00:31:27.980 [2024-06-10 11:38:25.176671] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.176726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.176741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.176748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.176754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.176768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 
00:31:27.980 [2024-06-10 11:38:25.186719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.186776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.186792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.186799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.186805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.186819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 00:31:27.980 [2024-06-10 11:38:25.196778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.980 [2024-06-10 11:38:25.196854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.980 [2024-06-10 11:38:25.196870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.980 [2024-06-10 11:38:25.196877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.980 [2024-06-10 11:38:25.196883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:27.980 [2024-06-10 11:38:25.196896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.980 qpair failed and we were unable to recover it. 00:31:28.242 [2024-06-10 11:38:25.206709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.242 [2024-06-10 11:38:25.206766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.242 [2024-06-10 11:38:25.206781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.242 [2024-06-10 11:38:25.206788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.242 [2024-06-10 11:38:25.206794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.206808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 
00:31:28.243 [2024-06-10 11:38:25.216765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.216819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.216838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.216845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.216850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.216865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.226814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.226872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.226889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.226896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.226902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.226916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.236836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.236896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.236911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.236918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.236924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.236937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 
00:31:28.243 [2024-06-10 11:38:25.246845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.246940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.246955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.246962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.246968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.246982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.256902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.256956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.256974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.256981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.256987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.257001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.266923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.266982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.266997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.267004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.267010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.267024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 
00:31:28.243 [2024-06-10 11:38:25.276993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.277067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.277082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.277090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.277096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.277110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.286988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.287098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.287113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.287120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.287126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.287139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.296944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.297015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.297030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.297037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.297043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.297061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 
00:31:28.243 [2024-06-10 11:38:25.306948] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.243 [2024-06-10 11:38:25.307003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.243 [2024-06-10 11:38:25.307018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.243 [2024-06-10 11:38:25.307025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.243 [2024-06-10 11:38:25.307032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.243 [2024-06-10 11:38:25.307046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.243 qpair failed and we were unable to recover it. 00:31:28.243 [2024-06-10 11:38:25.317073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.317132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.317147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.317154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.317160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.317174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.327094] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.327145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.327161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.327167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.327173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.327187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 
00:31:28.244 [2024-06-10 11:38:25.337134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.337191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.337206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.337213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.337219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.337233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.347155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.347211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.347229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.347236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.347242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.347256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.357170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.357227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.357242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.357249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.357255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.357269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 
00:31:28.244 [2024-06-10 11:38:25.367166] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.367265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.367281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.367288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.367294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.367307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.377223] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.377281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.377296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.377303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.377309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.377322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.387239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.387302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.387317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.387324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.387333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.387347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 
00:31:28.244 [2024-06-10 11:38:25.397219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.397276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.397292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.397299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.397305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.397319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.407314] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.407370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.407385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.407392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.244 [2024-06-10 11:38:25.407399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.244 [2024-06-10 11:38:25.407412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.244 qpair failed and we were unable to recover it. 00:31:28.244 [2024-06-10 11:38:25.417363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.244 [2024-06-10 11:38:25.417430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.244 [2024-06-10 11:38:25.417445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.244 [2024-06-10 11:38:25.417452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.245 [2024-06-10 11:38:25.417459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.245 [2024-06-10 11:38:25.417472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.245 qpair failed and we were unable to recover it. 
00:31:28.245 [2024-06-10 11:38:25.427356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.245 [2024-06-10 11:38:25.427472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.245 [2024-06-10 11:38:25.427487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.245 [2024-06-10 11:38:25.427494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.245 [2024-06-10 11:38:25.427500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.245 [2024-06-10 11:38:25.427514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.245 qpair failed and we were unable to recover it. 00:31:28.245 [2024-06-10 11:38:25.437400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.245 [2024-06-10 11:38:25.437511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.245 [2024-06-10 11:38:25.437527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.245 [2024-06-10 11:38:25.437533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.245 [2024-06-10 11:38:25.437540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.245 [2024-06-10 11:38:25.437553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.245 qpair failed and we were unable to recover it. 00:31:28.245 [2024-06-10 11:38:25.447429] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.245 [2024-06-10 11:38:25.447488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.245 [2024-06-10 11:38:25.447503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.245 [2024-06-10 11:38:25.447510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.245 [2024-06-10 11:38:25.447516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.245 [2024-06-10 11:38:25.447530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.245 qpair failed and we were unable to recover it. 
00:31:28.245 [2024-06-10 11:38:25.457454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.245 [2024-06-10 11:38:25.457516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.245 [2024-06-10 11:38:25.457531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.245 [2024-06-10 11:38:25.457538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.245 [2024-06-10 11:38:25.457544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.245 [2024-06-10 11:38:25.457557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.245 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.467503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.467560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.467575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.467582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.467588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.467601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.477513] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.477573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.477588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.477595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.477604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.477618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 
00:31:28.508 [2024-06-10 11:38:25.487531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.487587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.487601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.487608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.487614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.487628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.497601] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.497655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.497671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.497678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.497684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.497698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.507484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.507549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.507564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.507571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.507577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.507591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 
00:31:28.508 [2024-06-10 11:38:25.517618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.517680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.517695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.517702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.517708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.517721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.527536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.527590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.527605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.527612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.527618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.527631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.537678] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.537743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.537758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.537764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.537770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.537784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 
00:31:28.508 [2024-06-10 11:38:25.547584] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.547645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.547660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.547667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.547673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.547687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.557772] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.508 [2024-06-10 11:38:25.557856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.508 [2024-06-10 11:38:25.557872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.508 [2024-06-10 11:38:25.557879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.508 [2024-06-10 11:38:25.557885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.508 [2024-06-10 11:38:25.557899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.508 qpair failed and we were unable to recover it. 00:31:28.508 [2024-06-10 11:38:25.567652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.567711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.567726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.567736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.567742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.567755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 
00:31:28.509 [2024-06-10 11:38:25.577766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.577818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.577837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.577844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.577850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.577864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 00:31:28.509 [2024-06-10 11:38:25.587785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.587847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.587862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.587869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.587875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.587889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 00:31:28.509 [2024-06-10 11:38:25.597787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.597858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.597874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.597880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.597886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.597900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 
00:31:28.509 [2024-06-10 11:38:25.607851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.607901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.607916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.607922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.607928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.607942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 00:31:28.509 [2024-06-10 11:38:25.617785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.617860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.617875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.617882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.617890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.617904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 00:31:28.509 [2024-06-10 11:38:25.627940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.627993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.628008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.628016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.628022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.628035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 
00:31:28.509 [2024-06-10 11:38:25.637926] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.509 [2024-06-10 11:38:25.637991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.509 [2024-06-10 11:38:25.638006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.509 [2024-06-10 11:38:25.638013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.509 [2024-06-10 11:38:25.638019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.509 [2024-06-10 11:38:25.638032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.509 qpair failed and we were unable to recover it. 00:31:28.509 [2024-06-10 11:38:25.647984] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.648038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.648052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.648059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.648066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.648079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 00:31:28.510 [2024-06-10 11:38:25.658064] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.658120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.658138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.658145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.658151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.658164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 
00:31:28.510 [2024-06-10 11:38:25.668055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.668150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.668165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.668172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.668178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.668192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 00:31:28.510 [2024-06-10 11:38:25.678057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.678117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.678132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.678139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.678145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.678158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 00:31:28.510 [2024-06-10 11:38:25.687992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.688053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.688067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.688074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.688080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.688093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 
00:31:28.510 [2024-06-10 11:38:25.698170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.698225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.698240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.698246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.698252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.698270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 00:31:28.510 [2024-06-10 11:38:25.708205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.708277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.708292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.708299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.708306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.708319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 00:31:28.510 [2024-06-10 11:38:25.718176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.718237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.510 [2024-06-10 11:38:25.718252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.510 [2024-06-10 11:38:25.718259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.510 [2024-06-10 11:38:25.718265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0600000b90 00:31:28.510 [2024-06-10 11:38:25.718278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.510 qpair failed and we were unable to recover it. 
00:31:28.510 [2024-06-10 11:38:25.728204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.510 [2024-06-10 11:38:25.728256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.511 [2024-06-10 11:38:25.728276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.511 [2024-06-10 11:38:25.728282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.511 [2024-06-10 11:38:25.728288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f05f8000b90 00:31:28.511 [2024-06-10 11:38:25.728301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.511 qpair failed and we were unable to recover it. 00:31:28.773 [2024-06-10 11:38:25.738224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.773 [2024-06-10 11:38:25.738277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.773 [2024-06-10 11:38:25.738291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.773 [2024-06-10 11:38:25.738297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.773 [2024-06-10 11:38:25.738302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f05f8000b90 00:31:28.773 [2024-06-10 11:38:25.738313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.773 qpair failed and we were unable to recover it. 00:31:28.773 [2024-06-10 11:38:25.748288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.773 [2024-06-10 11:38:25.748419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.773 [2024-06-10 11:38:25.748493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.773 [2024-06-10 11:38:25.748519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.773 [2024-06-10 11:38:25.748541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1137770 00:31:28.773 [2024-06-10 11:38:25.748592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:28.773 qpair failed and we were unable to recover it. 
00:31:28.773 [2024-06-10 11:38:25.758268] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.773 [2024-06-10 11:38:25.758356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.773 [2024-06-10 11:38:25.758391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.773 [2024-06-10 11:38:25.758408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.773 [2024-06-10 11:38:25.758422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1137770 00:31:28.773 [2024-06-10 11:38:25.758452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:28.773 qpair failed and we were unable to recover it. 00:31:28.773 [2024-06-10 11:38:25.768298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.773 [2024-06-10 11:38:25.768391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.773 [2024-06-10 11:38:25.768456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.773 [2024-06-10 11:38:25.768482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.773 [2024-06-10 11:38:25.768502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f05f0000b90 00:31:28.773 [2024-06-10 11:38:25.768556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:28.773 qpair failed and we were unable to recover it. 00:31:28.773 [2024-06-10 11:38:25.778334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.773 [2024-06-10 11:38:25.778440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.773 [2024-06-10 11:38:25.778487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.773 [2024-06-10 11:38:25.778506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.773 [2024-06-10 11:38:25.778522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f05f0000b90 00:31:28.773 [2024-06-10 11:38:25.778560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:28.773 qpair failed and we were unable to recover it. 00:31:28.773 [2024-06-10 11:38:25.778632] nvme_ctrlr.c:4395:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:28.773 A controller has encountered a failure and is being reset. 00:31:28.773 [2024-06-10 11:38:25.778669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1135600 (9): Bad file descriptor 00:31:28.773 Controller properly reset. 
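For reference, the TCP fabric endpoint being exercised above (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) can also be reached by hand with the kernel initiator; a minimal sketch, assuming nvme-cli and the nvme-tcp module are available on the host and run outside the automated flow:
    # load the kernel NVMe/TCP initiator and connect to the target address shown in the log
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # inspect the attached controller and namespaces, then detach when done
    nvme list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1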
00:31:28.773 Initializing NVMe Controllers 00:31:28.773 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:28.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:28.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:28.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:28.773 Initialization complete. Launching workers. 00:31:28.773 Starting thread on core 1 00:31:28.773 Starting thread on core 2 00:31:28.773 Starting thread on core 3 00:31:28.773 Starting thread on core 0 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:28.774 00:31:28.774 real 0m11.427s 00:31:28.774 user 0m21.868s 00:31:28.774 sys 0m3.561s 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:28.774 ************************************ 00:31:28.774 END TEST nvmf_target_disconnect_tc2 00:31:28.774 ************************************ 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.774 rmmod nvme_tcp 00:31:28.774 rmmod nvme_fabrics 00:31:28.774 rmmod nvme_keyring 00:31:28.774 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:29.036 11:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1735146 ']' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1735146 ']' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@955 -- # process_name=reactor_4 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1735146' 00:31:29.036 killing process with pid 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1735146 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.036 11:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.582 11:38:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:31.582 00:31:31.582 real 0m22.262s 00:31:31.582 user 0m49.961s 00:31:31.582 sys 0m9.874s 00:31:31.582 11:38:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.582 11:38:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.582 ************************************ 00:31:31.582 END TEST nvmf_target_disconnect 00:31:31.582 ************************************ 00:31:31.582 11:38:28 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:31:31.582 11:38:28 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:31.582 11:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.582 11:38:28 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:31.582 00:31:31.582 real 23m45.388s 00:31:31.582 user 48m15.852s 00:31:31.582 sys 7m46.180s 00:31:31.582 11:38:28 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.582 11:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.582 ************************************ 00:31:31.582 END TEST nvmf_tcp 00:31:31.582 ************************************ 00:31:31.582 11:38:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:31:31.582 11:38:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.582 11:38:28 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:31.582 11:38:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:31.582 11:38:28 -- common/autotest_common.sh@10 -- # set +x 00:31:31.582 ************************************ 00:31:31.582 START TEST spdkcli_nvmf_tcp 00:31:31.582 ************************************ 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.582 * Looking for test storage... 
00:31:31.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:31.582 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1736705 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1736705 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1736705 ']' 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:31.583 11:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.583 [2024-06-10 11:38:28.645979] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:31:31.583 [2024-06-10 11:38:28.646056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736705 ] 00:31:31.583 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.583 [2024-06-10 11:38:28.731768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:31.843 [2024-06-10 11:38:28.816397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.843 [2024-06-10 11:38:28.816402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.414 11:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:32.414 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:32.414 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:32.414 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:32.414 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:32.414 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:32.414 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:32.414 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:32.414 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:32.414 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:32.414 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:32.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:32.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:32.415 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:32.415 ' 00:31:34.961 [2024-06-10 11:38:31.889320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.902 [2024-06-10 11:38:33.053117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:38.441 [2024-06-10 11:38:35.191361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:39.822 [2024-06-10 11:38:37.028870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:41.733 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:41.733 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:41.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:41.733 Executing 
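For reference, the spdkcli_job.py batch above is a front end for plain JSON-RPC calls; a rough equivalent using scripts/rpc.py against the default /var/tmp/spdk.sock is sketched below. The RPC names and most flags appear verbatim later in this log; the -b bdev-name and -n nsid spellings are assumptions, not taken from this run.
  # malloc bdev: 32 MiB, 512-byte blocks (spdkcli '/bdevs/malloc create 32 512 Malloc1')
  scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
  # TCP transport with an 8 KiB IO unit size, as this test configures
  scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  # subsystem with serial N37SXV509SRW, up to 4 namespaces, any host allowed
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  # attach Malloc3 as namespace 1 and listen on 127.0.0.1:4260
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260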
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:41.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:41.733 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:41.733 11:38:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:41.992 11:38:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.992 11:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:41.992 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:41.992 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.992 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:41.992 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:41.992 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:41.992 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:41.992 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:41.992 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:41.992 ' 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:47.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:47.277 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:47.277 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc4', 
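The clear-config batch above also has direct RPC counterparts; a minimal teardown sketch, again assuming the stock scripts/rpc.py client (nvmf_delete_subsystem is shown later in this log, the remove_ns/remove_listener spellings are assumptions):
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
  scripts/rpc.py bdev_malloc_delete Malloc1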
'Malloc4', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:47.277 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1736705 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1736705 ']' 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1736705 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1736705 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1736705' 00:31:47.277 killing process with pid 1736705 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1736705 00:31:47.277 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1736705 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1736705 ']' 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1736705 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1736705 ']' 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1736705 00:31:47.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1736705) - No such process 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1736705 is not found' 00:31:47.538 Process with pid 1736705 is not found 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:47.538 00:31:47.538 real 0m16.181s 00:31:47.538 user 0m33.975s 00:31:47.538 sys 0m0.816s 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:47.538 11:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.538 ************************************ 00:31:47.538 END TEST spdkcli_nvmf_tcp 00:31:47.538 ************************************ 00:31:47.538 11:38:44 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:47.538 11:38:44 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:47.538 11:38:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:47.538 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:31:47.538 ************************************ 00:31:47.538 START TEST nvmf_identify_passthru 00:31:47.538 ************************************ 00:31:47.538 11:38:44 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:47.800 * Looking for test storage... 00:31:47.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.800 11:38:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.800 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.800 11:38:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.800 11:38:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.800 11:38:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@3 -- 
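The identify_passthru test that starts here enables the target's passthru identify handler and then checks that the controller data reported over NVMe/TCP matches the local PCIe device. Condensed to its two key probes (both commands appear verbatim further down; paths shortened here), the comparison is roughly:
  # local identify of the backing controller over PCIe
  build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:'
  # the same controller seen through the NVMe-oF TCP target
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'
  # the test fails if the two serial numbers (or model numbers) differ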
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.801 11:38:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.801 11:38:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.801 11:38:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.801 11:38:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:47.801 11:38:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.801 11:38:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.801 11:38:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:47.801 11:38:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:47.801 11:38:44 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:47.801 11:38:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
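gather_supported_nvmf_pci_devs, which runs next, selects candidate ports by PCI vendor/device ID; for a TCP transport with E810 NICs only the e810 entries are kept (the x722 and Mellanox IDs are added only when the transport is rdma). A rough manual equivalent, a sketch using lspci's -d vendor:device filter with the IDs listed below:
  lspci -d 8086:159b   # Intel E810, e.g. 0000:4b:00.0 / 0000:4b:00.1 found below
  lspci -d 8086:1592   # other E810 variant
  lspci -d 8086:37d2   # X722, RDMA transports only
  lspci -d 15b3:       # Mellanox, likewise RDMA only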
00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:55.946 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.946 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:55.947 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:55.947 11:38:52 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:55.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:55.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:55.947 11:38:52 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:55.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:31:55.947 00:31:55.947 --- 10.0.0.2 ping statistics --- 00:31:55.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.947 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:31:55.947 00:31:55.947 --- 10.0.0.1 ping statistics --- 00:31:55.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.947 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:55.947 11:38:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:55.947 11:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:55.947 11:38:52 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:55.947 11:38:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:55.947 11:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:31:55.947 11:38:53 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:31:55.947 11:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:55.947 11:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:55.947 11:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:55.947 11:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:55.947 11:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:56.208 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.501 
11:38:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9512038S2P0BGN 00:32:01.501 11:38:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:01.501 11:38:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:01.501 11:38:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:01.501 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1745274 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:06.796 11:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1745274 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1745274 ']' 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:06.796 11:39:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:06.796 [2024-06-10 11:39:03.407083] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:32:06.796 [2024-06-10 11:39:03.407141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.796 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.796 [2024-06-10 11:39:03.497108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.796 [2024-06-10 11:39:03.562618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.796 [2024-06-10 11:39:03.562652] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:06.796 [2024-06-10 11:39:03.562659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.796 [2024-06-10 11:39:03.562665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.796 [2024-06-10 11:39:03.562670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.796 [2024-06-10 11:39:03.562790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.796 [2024-06-10 11:39:03.562938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.796 [2024-06-10 11:39:03.563155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.796 [2024-06-10 11:39:03.563157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.056 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:07.056 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:32:07.057 11:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:07.057 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.057 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.057 INFO: Log level set to 20 00:32:07.057 INFO: Requests: 00:32:07.057 { 00:32:07.057 "jsonrpc": "2.0", 00:32:07.057 "method": "nvmf_set_config", 00:32:07.057 "id": 1, 00:32:07.057 "params": { 00:32:07.057 "admin_cmd_passthru": { 00:32:07.057 "identify_ctrlr": true 00:32:07.057 } 00:32:07.057 } 00:32:07.057 } 00:32:07.057 00:32:07.057 INFO: response: 00:32:07.057 { 00:32:07.057 "jsonrpc": "2.0", 00:32:07.057 "id": 1, 00:32:07.057 "result": true 00:32:07.057 } 00:32:07.057 00:32:07.057 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.057 11:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:07.057 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.057 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.057 INFO: Setting log level to 20 00:32:07.057 INFO: Setting log level to 20 00:32:07.057 INFO: Log level set to 20 00:32:07.057 INFO: Log level set to 20 00:32:07.057 INFO: Requests: 00:32:07.057 { 00:32:07.057 "jsonrpc": "2.0", 00:32:07.057 "method": "framework_start_init", 00:32:07.057 "id": 1 00:32:07.057 } 00:32:07.057 00:32:07.057 INFO: Requests: 00:32:07.057 { 00:32:07.057 "jsonrpc": "2.0", 00:32:07.057 "method": "framework_start_init", 00:32:07.057 "id": 1 00:32:07.057 } 00:32:07.057 00:32:07.318 [2024-06-10 11:39:04.332548] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:07.318 INFO: response: 00:32:07.318 { 00:32:07.318 "jsonrpc": "2.0", 00:32:07.318 "id": 1, 00:32:07.318 "result": true 00:32:07.318 } 00:32:07.318 00:32:07.318 INFO: response: 00:32:07.318 { 00:32:07.318 "jsonrpc": "2.0", 00:32:07.318 "id": 1, 00:32:07.318 "result": true 00:32:07.318 } 00:32:07.318 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.318 11:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.318 11:39:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.318 INFO: Setting log level to 40 00:32:07.318 INFO: Setting log level to 40 00:32:07.318 INFO: Setting log level to 40 00:32:07.318 [2024-06-10 11:39:04.345768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.318 11:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.318 11:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.318 11:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.686 Nvme0n1 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.686 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.686 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.686 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.686 [2024-06-10 11:39:07.259944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.686 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.686 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.686 [ 00:32:10.686 { 00:32:10.686 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:10.686 "subtype": "Discovery", 00:32:10.686 "listen_addresses": [], 00:32:10.686 "allow_any_host": true, 00:32:10.686 "hosts": [] 00:32:10.686 }, 00:32:10.686 { 00:32:10.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.686 "subtype": "NVMe", 00:32:10.686 "listen_addresses": [ 00:32:10.686 { 00:32:10.686 "trtype": "TCP", 00:32:10.686 "adrfam": "IPv4", 00:32:10.686 "traddr": "10.0.0.2", 00:32:10.686 "trsvcid": "4420" 00:32:10.686 } 00:32:10.686 ], 00:32:10.686 "allow_any_host": true, 00:32:10.686 "hosts": [], 00:32:10.686 "serial_number": 
"SPDK00000000000001", 00:32:10.686 "model_number": "SPDK bdev Controller", 00:32:10.686 "max_namespaces": 1, 00:32:10.686 "min_cntlid": 1, 00:32:10.686 "max_cntlid": 65519, 00:32:10.686 "namespaces": [ 00:32:10.686 { 00:32:10.686 "nsid": 1, 00:32:10.686 "bdev_name": "Nvme0n1", 00:32:10.686 "name": "Nvme0n1", 00:32:10.686 "nguid": "7312EA4AA8284F6C92F3A14211328CE6", 00:32:10.686 "uuid": "7312ea4a-a828-4f6c-92f3-a14211328ce6" 00:32:10.686 } 00:32:10.686 ] 00:32:10.686 } 00:32:10.687 ] 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512038S2P0BGN 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9512038S2P0BGN '!=' PHLJ9512038S2P0BGN ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:10.687 11:39:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.687 rmmod nvme_tcp 00:32:10.687 rmmod nvme_fabrics 00:32:10.687 rmmod nvme_keyring 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:32:10.687 11:39:07 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1745274 ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1745274 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1745274 ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1745274 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1745274 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1745274' 00:32:10.687 killing process with pid 1745274 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1745274 00:32:10.687 11:39:07 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1745274 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:13.228 11:39:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.228 11:39:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:13.228 11:39:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.143 11:39:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:15.143 00:32:15.143 real 0m27.581s 00:32:15.143 user 0m36.636s 00:32:15.143 sys 0m7.004s 00:32:15.143 11:39:12 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:15.143 11:39:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:15.143 ************************************ 00:32:15.143 END TEST nvmf_identify_passthru 00:32:15.143 ************************************ 00:32:15.143 11:39:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:15.143 11:39:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:15.143 11:39:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:15.143 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:32:15.143 ************************************ 00:32:15.143 START TEST nvmf_dif 00:32:15.143 ************************************ 00:32:15.143 11:39:12 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:15.406 * Looking for test storage... 
00:32:15.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.406 11:39:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.406 11:39:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.406 11:39:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.406 11:39:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.406 11:39:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.406 11:39:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.406 11:39:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.406 11:39:12 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:32:15.406 11:39:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:15.406 11:39:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:15.407 11:39:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:15.407 11:39:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:15.407 11:39:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:15.407 11:39:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:15.407 11:39:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.407 11:39:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.407 11:39:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:15.407 11:39:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:32:15.407 11:39:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:23.553 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:23.553 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
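The discovery block above filters the PCI bus for supported NICs (here two Intel E810 functions, 0x8086:0x159b, already bound to the ice driver) and then resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, using the PCI addresses and cvl_* names reported in this run:

    # map each E810 PCI function to its net device via sysfs
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue                        # skip functions with no bound net driver
            echo "Found net devices under $pci: ${netdir##*/}"  # cvl_0_0 and cvl_0_1 on this machine
        done
    done

The matching "Found net devices under ..." messages appear in the trace directly below.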
00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.553 11:39:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:23.554 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:23.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.554 11:39:20 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:32:23.554 00:32:23.554 --- 10.0.0.2 ping statistics --- 00:32:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.554 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:23.554 00:32:23.554 --- 10.0.0.1 ping statistics --- 00:32:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.554 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:23.554 11:39:20 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:27.759 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:27.759 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:27.759 11:39:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:27.759 11:39:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1752702 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1752702 00:32:27.759 11:39:24 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1752702 ']' 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:27.759 11:39:24 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.760 11:39:24 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:27.760 11:39:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:27.760 [2024-06-10 11:39:24.351554] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:32:27.760 [2024-06-10 11:39:24.351618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.760 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.760 [2024-06-10 11:39:24.428656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.760 [2024-06-10 11:39:24.519995] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.760 [2024-06-10 11:39:24.520056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.760 [2024-06-10 11:39:24.520064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.760 [2024-06-10 11:39:24.520070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.760 [2024-06-10 11:39:24.520076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
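The prologue above builds a two-endpoint TCP topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given the target address 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; nvmf_tgt is then launched inside that namespace and the wrapper waits for its RPC socket. A condensed sketch assembled from the traced commands (interface names, addresses and flags are the ones seen in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &                            # all later app/RPC calls are wrapped in this netns
    nvmfpid=$!                                                           # waitforlisten then polls /var/tmp/spdk.sock

Once the target reports its reactor started (next trace line) and the RPC socket is up, dif.sh starts issuing RPCs against it.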
00:32:27.760 [2024-06-10 11:39:24.520108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.022 11:39:25 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:28.022 11:39:25 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:32:28.022 11:39:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.022 11:39:25 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:28.022 11:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 11:39:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.284 11:39:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:28.284 11:39:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 [2024-06-10 11:39:25.286251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.284 11:39:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 ************************************ 00:32:28.284 START TEST fio_dif_1_default 00:32:28.284 ************************************ 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 bdev_null0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:28.284 [2024-06-10 11:39:25.378656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:28.284 { 00:32:28.284 "params": { 00:32:28.284 "name": "Nvme$subsystem", 00:32:28.284 "trtype": "$TEST_TRANSPORT", 00:32:28.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.284 "adrfam": "ipv4", 00:32:28.284 "trsvcid": "$NVMF_PORT", 00:32:28.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.284 "hdgst": ${hdgst:-false}, 00:32:28.284 "ddgst": ${ddgst:-false} 00:32:28.284 }, 00:32:28.284 "method": "bdev_nvme_attach_controller" 00:32:28.284 } 00:32:28.284 EOF 00:32:28.284 )") 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:28.284 "params": { 00:32:28.284 "name": "Nvme0", 00:32:28.284 "trtype": "tcp", 00:32:28.284 "traddr": "10.0.0.2", 00:32:28.284 "adrfam": "ipv4", 00:32:28.284 "trsvcid": "4420", 00:32:28.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.284 "hdgst": false, 00:32:28.284 "ddgst": false 00:32:28.284 }, 00:32:28.284 "method": "bdev_nvme_attach_controller" 00:32:28.284 }' 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.284 11:39:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.545 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:28.545 fio-3.35 00:32:28.545 Starting 1 thread 00:32:28.806 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.038 00:32:41.038 filename0: (groupid=0, jobs=1): err= 0: pid=1753190: Mon Jun 10 11:39:36 2024 00:32:41.038 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10030msec) 00:32:41.038 slat (nsec): min=7240, max=55076, avg=7557.06, stdev=2007.85 00:32:41.038 clat (usec): min=40834, max=42323, avg=41422.75, stdev=491.78 00:32:41.038 lat (usec): min=40842, max=42365, avg=41430.31, stdev=491.93 00:32:41.038 clat percentiles (usec): 00:32:41.038 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:41.038 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:32:41.038 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:41.038 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:41.038 | 99.99th=[42206] 00:32:41.038 bw ( KiB/s): min= 352, max= 416, per=99.73%, avg=385.60, stdev=12.61, samples=20 00:32:41.038 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:32:41.038 
lat (msec) : 50=100.00% 00:32:41.038 cpu : usr=95.79%, sys=3.97%, ctx=14, majf=0, minf=246 00:32:41.038 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.038 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.038 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:41.038 00:32:41.038 Run status group 0 (all jobs): 00:32:41.038 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3872KiB (3965kB), run=10030-10030msec 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 00:32:41.038 real 0m11.105s 00:32:41.038 user 0m16.777s 00:32:41.038 sys 0m0.748s 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 ************************************ 00:32:41.038 END TEST fio_dif_1_default 00:32:41.038 ************************************ 00:32:41.038 11:39:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:41.038 11:39:36 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:41.038 11:39:36 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 ************************************ 00:32:41.038 START TEST fio_dif_1_multi_subsystems 00:32:41.038 ************************************ 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:41.038 11:39:36 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 bdev_null0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 [2024-06-10 11:39:36.561642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 bdev_null1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.038 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:41.039 { 00:32:41.039 "params": { 00:32:41.039 "name": "Nvme$subsystem", 00:32:41.039 "trtype": "$TEST_TRANSPORT", 00:32:41.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.039 "adrfam": "ipv4", 00:32:41.039 "trsvcid": "$NVMF_PORT", 00:32:41.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.039 "hdgst": ${hdgst:-false}, 00:32:41.039 "ddgst": ${ddgst:-false} 00:32:41.039 }, 00:32:41.039 "method": "bdev_nvme_attach_controller" 00:32:41.039 } 00:32:41.039 EOF 00:32:41.039 )") 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:32:41.039 11:39:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:41.039 { 00:32:41.039 "params": { 00:32:41.039 "name": "Nvme$subsystem", 00:32:41.039 "trtype": "$TEST_TRANSPORT", 00:32:41.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.039 "adrfam": "ipv4", 00:32:41.039 "trsvcid": "$NVMF_PORT", 00:32:41.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.039 "hdgst": ${hdgst:-false}, 00:32:41.039 "ddgst": ${ddgst:-false} 00:32:41.039 }, 00:32:41.039 "method": "bdev_nvme_attach_controller" 00:32:41.039 } 00:32:41.039 EOF 00:32:41.039 )") 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
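The two per-subsystem setups above repeat the pattern already used for the single-subsystem case: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wrapped in an NVMe-oF subsystem that listens on the namespaced target address. A sketch of that RPC sequence for the two subsystems exercised here; rpc.py stands in for the rpc_cmd wrapper used by the test, but the RPC names and arguments are exactly those in the trace:

    # transport is created once for the whole dif suite, with DIF insert/strip enabled
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for sub in 0 1; do
        ./scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
            --serial-number 53313233-$sub --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
            -t tcp -a 10.0.0.2 -s 4420
    done

The bdev_nvme_attach_controller JSON that fio will use to reach both controllers is assembled and printed in the trace that follows.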
00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:41.039 "params": { 00:32:41.039 "name": "Nvme0", 00:32:41.039 "trtype": "tcp", 00:32:41.039 "traddr": "10.0.0.2", 00:32:41.039 "adrfam": "ipv4", 00:32:41.039 "trsvcid": "4420", 00:32:41.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:41.039 "hdgst": false, 00:32:41.039 "ddgst": false 00:32:41.039 }, 00:32:41.039 "method": "bdev_nvme_attach_controller" 00:32:41.039 },{ 00:32:41.039 "params": { 00:32:41.039 "name": "Nvme1", 00:32:41.039 "trtype": "tcp", 00:32:41.039 "traddr": "10.0.0.2", 00:32:41.039 "adrfam": "ipv4", 00:32:41.039 "trsvcid": "4420", 00:32:41.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.039 "hdgst": false, 00:32:41.039 "ddgst": false 00:32:41.039 }, 00:32:41.039 "method": "bdev_nvme_attach_controller" 00:32:41.039 }' 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:41.039 11:39:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.039 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:41.039 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:41.039 fio-3.35 00:32:41.039 Starting 2 threads 00:32:41.039 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.112 00:32:51.112 filename0: (groupid=0, jobs=1): err= 0: pid=1755177: Mon Jun 10 11:39:47 2024 00:32:51.112 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10027msec) 00:32:51.112 slat (nsec): min=2867, max=60502, avg=6477.41, stdev=2239.52 00:32:51.112 clat (usec): min=722, max=49098, avg=21223.55, stdev=20221.66 00:32:51.112 lat (usec): min=727, max=49113, avg=21230.03, stdev=20221.47 00:32:51.112 clat percentiles (usec): 00:32:51.112 | 1.00th=[ 758], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:32:51.112 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:32:51.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:32:51.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:32:51.112 | 99.99th=[49021] 
00:32:51.112 bw ( KiB/s): min= 672, max= 768, per=66.16%, avg=753.60, stdev=30.22, samples=20 00:32:51.112 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:32:51.112 lat (usec) : 750=0.64%, 1000=47.25% 00:32:51.112 lat (msec) : 2=1.91%, 50=50.21% 00:32:51.112 cpu : usr=97.65%, sys=2.13%, ctx=22, majf=0, minf=183 00:32:51.112 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.112 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.112 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:51.112 filename1: (groupid=0, jobs=1): err= 0: pid=1755178: Mon Jun 10 11:39:47 2024 00:32:51.112 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10038msec) 00:32:51.112 slat (nsec): min=7222, max=39865, avg=7760.44, stdev=1769.11 00:32:51.112 clat (usec): min=40848, max=45859, avg=41457.44, stdev=563.29 00:32:51.112 lat (usec): min=40856, max=45893, avg=41465.20, stdev=563.68 00:32:51.112 clat percentiles (usec): 00:32:51.112 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:51.112 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:32:51.112 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:51.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:32:51.112 | 99.99th=[45876] 00:32:51.112 bw ( KiB/s): min= 352, max= 416, per=33.83%, avg=385.60, stdev=12.61, samples=20 00:32:51.112 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:32:51.112 lat (msec) : 50=100.00% 00:32:51.112 cpu : usr=97.79%, sys=1.99%, ctx=12, majf=0, minf=184 00:32:51.112 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.112 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.112 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:51.112 00:32:51.112 Run status group 0 (all jobs): 00:32:51.112 READ: bw=1138KiB/s (1165kB/s), 386KiB/s-753KiB/s (395kB/s-771kB/s), io=11.2MiB (11.7MB), run=10027-10038msec 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.112 00:32:51.112 real 0m11.423s 00:32:51.112 user 0m30.198s 00:32:51.112 sys 0m0.722s 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:51.112 11:39:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 ************************************ 00:32:51.112 END TEST fio_dif_1_multi_subsystems 00:32:51.112 ************************************ 00:32:51.112 11:39:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:51.112 11:39:47 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:51.112 11:39:47 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:51.112 11:39:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 ************************************ 00:32:51.112 START TEST fio_dif_rand_params 00:32:51.112 ************************************ 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:51.112 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:51.113 11:39:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:51.113 bdev_null0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:51.113 [2024-06-10 11:39:48.064828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:51.113 { 00:32:51.113 "params": { 00:32:51.113 "name": "Nvme$subsystem", 00:32:51.113 "trtype": "$TEST_TRANSPORT", 00:32:51.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.113 "adrfam": "ipv4", 00:32:51.113 
"trsvcid": "$NVMF_PORT", 00:32:51.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.113 "hdgst": ${hdgst:-false}, 00:32:51.113 "ddgst": ${ddgst:-false} 00:32:51.113 }, 00:32:51.113 "method": "bdev_nvme_attach_controller" 00:32:51.113 } 00:32:51.113 EOF 00:32:51.113 )") 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:51.113 "params": { 00:32:51.113 "name": "Nvme0", 00:32:51.113 "trtype": "tcp", 00:32:51.113 "traddr": "10.0.0.2", 00:32:51.113 "adrfam": "ipv4", 00:32:51.113 "trsvcid": "4420", 00:32:51.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:51.113 "hdgst": false, 00:32:51.113 "ddgst": false 00:32:51.113 }, 00:32:51.113 "method": "bdev_nvme_attach_controller" 00:32:51.113 }' 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:51.113 11:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:51.373 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:51.373 ... 
00:32:51.373 fio-3.35 00:32:51.373 Starting 3 threads 00:32:51.373 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.951 00:32:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=1757172: Mon Jun 10 11:39:54 2024 00:32:57.951 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5004msec) 00:32:57.951 slat (nsec): min=7269, max=51605, avg=8227.07, stdev=1705.48 00:32:57.951 clat (usec): min=4168, max=53651, avg=11274.67, stdev=11263.10 00:32:57.951 lat (usec): min=4177, max=53658, avg=11282.89, stdev=11263.10 00:32:57.951 clat percentiles (usec): 00:32:57.951 | 1.00th=[ 4752], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6521], 00:32:57.951 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8717], 00:32:57.951 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[48497], 00:32:57.951 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:32:57.951 | 99.99th=[53740] 00:32:57.951 bw ( KiB/s): min=16128, max=50688, per=37.17%, avg=33971.20, stdev=9090.34, samples=10 00:32:57.951 iops : min= 126, max= 396, avg=265.40, stdev=71.02, samples=10 00:32:57.951 lat (msec) : 10=82.11%, 20=10.00%, 50=4.96%, 100=2.93% 00:32:57.951 cpu : usr=96.62%, sys=3.10%, ctx=14, majf=0, minf=77 00:32:57.951 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 issued rwts: total=1330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=1757173: Mon Jun 10 11:39:54 2024 00:32:57.951 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(136MiB/5005msec) 00:32:57.951 slat (nsec): min=7267, max=33859, avg=8161.46, stdev=1660.41 00:32:57.951 clat (usec): min=4807, max=91080, avg=13798.99, stdev=11752.74 00:32:57.951 lat (usec): min=4814, max=91088, avg=13807.16, stdev=11752.72 00:32:57.951 clat percentiles (usec): 00:32:57.951 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 8225], 00:32:57.951 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11076], 00:32:57.951 | 70.00th=[11994], 80.00th=[13304], 90.00th=[15270], 95.00th=[49546], 00:32:57.951 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54789], 99.95th=[90702], 00:32:57.951 | 99.99th=[90702] 00:32:57.951 bw ( KiB/s): min=18432, max=36352, per=30.36%, avg=27750.40, stdev=4824.47, samples=10 00:32:57.951 iops : min= 144, max= 284, avg=216.80, stdev=37.69, samples=10 00:32:57.951 lat (msec) : 10=44.34%, 20=46.64%, 50=4.60%, 100=4.42% 00:32:57.951 cpu : usr=96.68%, sys=3.06%, ctx=13, majf=0, minf=215 00:32:57.951 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=1757174: Mon Jun 10 11:39:54 2024 00:32:57.951 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5003msec) 00:32:57.951 slat (nsec): min=7276, max=31554, avg=8567.76, stdev=1509.38 00:32:57.951 clat (usec): min=5591, max=90872, avg=12959.80, stdev=9640.17 00:32:57.951 lat (usec): min=5599, max=90883, avg=12968.37, stdev=9640.44 00:32:57.951 clat percentiles (usec): 
00:32:57.951 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 8160], 00:32:57.951 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:32:57.951 | 70.00th=[12649], 80.00th=[14091], 90.00th=[15533], 95.00th=[47973], 00:32:57.951 | 99.00th=[51643], 99.50th=[54789], 99.90th=[55837], 99.95th=[90702], 00:32:57.951 | 99.99th=[90702] 00:32:57.951 bw ( KiB/s): min=21760, max=35584, per=32.35%, avg=29568.00, stdev=4496.43, samples=10 00:32:57.951 iops : min= 170, max= 278, avg=231.00, stdev=35.13, samples=10 00:32:57.951 lat (msec) : 10=39.33%, 20=55.06%, 50=3.72%, 100=1.90% 00:32:57.951 cpu : usr=96.18%, sys=3.56%, ctx=9, majf=0, minf=123 00:32:57.951 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.951 issued rwts: total=1157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.951 00:32:57.951 Run status group 0 (all jobs): 00:32:57.951 READ: bw=89.3MiB/s (93.6MB/s), 27.1MiB/s-33.2MiB/s (28.5MB/s-34.8MB/s), io=447MiB (468MB), run=5003-5005msec 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
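The first fio_dif_rand_params pass above (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) reduces to a short target-side RPC sequence plus one fio run through the spdk_bdev plugin. A minimal stand-alone sketch of that sequence follows, assuming a running nvmf_tgt with the tcp transport already created and an SPDK tree at ./spdk; the file names passed to fio are illustrative, since the test script feeds both the JSON config and the generated jobfile over /dev/fd.

# Target side: a null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP
./spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the JSON printed above (bdev_nvme_attach_controller for Nvme0) goes to
# --spdk_json_conf, and the generated jobfile drives 3 jobs of 128 KiB random reads at QD 3.
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json ./dif_rand.fio

# Teardown, as in destroy_subsystems 0
./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./spdk/scripts/rpc.py bdev_null_delete bdev_null0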
00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:57.951 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 bdev_null0 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 [2024-06-10 11:39:54.229053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 bdev_null1 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 bdev_null2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:57.952 { 00:32:57.952 "params": { 00:32:57.952 "name": "Nvme$subsystem", 00:32:57.952 "trtype": "$TEST_TRANSPORT", 00:32:57.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.952 "adrfam": "ipv4", 00:32:57.952 "trsvcid": "$NVMF_PORT", 00:32:57.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.952 "hdgst": ${hdgst:-false}, 00:32:57.952 "ddgst": ${ddgst:-false} 00:32:57.952 }, 00:32:57.952 "method": "bdev_nvme_attach_controller" 00:32:57.952 } 00:32:57.952 EOF 00:32:57.952 )") 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:57.952 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:57.953 { 00:32:57.953 "params": { 00:32:57.953 "name": "Nvme$subsystem", 00:32:57.953 "trtype": "$TEST_TRANSPORT", 00:32:57.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.953 "adrfam": "ipv4", 00:32:57.953 "trsvcid": "$NVMF_PORT", 00:32:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.953 "hdgst": ${hdgst:-false}, 00:32:57.953 "ddgst": ${ddgst:-false} 00:32:57.953 }, 00:32:57.953 "method": "bdev_nvme_attach_controller" 00:32:57.953 } 00:32:57.953 EOF 00:32:57.953 )") 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:57.953 { 00:32:57.953 "params": { 00:32:57.953 "name": "Nvme$subsystem", 00:32:57.953 "trtype": "$TEST_TRANSPORT", 00:32:57.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.953 "adrfam": "ipv4", 00:32:57.953 "trsvcid": "$NVMF_PORT", 00:32:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.953 "hdgst": ${hdgst:-false}, 00:32:57.953 "ddgst": ${ddgst:-false} 00:32:57.953 }, 00:32:57.953 "method": "bdev_nvme_attach_controller" 00:32:57.953 } 00:32:57.953 EOF 00:32:57.953 )") 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:57.953 "params": { 00:32:57.953 "name": "Nvme0", 00:32:57.953 "trtype": "tcp", 00:32:57.953 "traddr": "10.0.0.2", 00:32:57.953 "adrfam": "ipv4", 00:32:57.953 "trsvcid": "4420", 00:32:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.953 "hdgst": false, 00:32:57.953 "ddgst": false 00:32:57.953 }, 00:32:57.953 "method": "bdev_nvme_attach_controller" 00:32:57.953 },{ 00:32:57.953 "params": { 00:32:57.953 "name": "Nvme1", 00:32:57.953 "trtype": "tcp", 00:32:57.953 "traddr": "10.0.0.2", 00:32:57.953 "adrfam": "ipv4", 00:32:57.953 "trsvcid": "4420", 00:32:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.953 "hdgst": false, 00:32:57.953 "ddgst": false 00:32:57.953 }, 00:32:57.953 "method": "bdev_nvme_attach_controller" 00:32:57.953 },{ 00:32:57.953 "params": { 00:32:57.953 "name": "Nvme2", 00:32:57.953 "trtype": "tcp", 00:32:57.953 "traddr": "10.0.0.2", 00:32:57.953 "adrfam": "ipv4", 00:32:57.953 "trsvcid": "4420", 00:32:57.953 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:57.953 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:57.953 "hdgst": false, 00:32:57.953 "ddgst": false 00:32:57.953 }, 00:32:57.953 "method": "bdev_nvme_attach_controller" 00:32:57.953 }' 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:57.953 11:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.953 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.953 ... 00:32:57.953 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.953 ... 00:32:57.953 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.953 ... 00:32:57.953 fio-3.35 00:32:57.953 Starting 24 threads 00:32:57.953 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.181 00:33:10.181 filename0: (groupid=0, jobs=1): err= 0: pid=1758274: Mon Jun 10 11:40:05 2024 00:33:10.181 read: IOPS=545, BW=2182KiB/s (2234kB/s)(21.3MiB/10003msec) 00:33:10.181 slat (nsec): min=7327, max=81241, avg=14762.06, stdev=10743.28 00:33:10.181 clat (usec): min=4823, max=31204, avg=29215.82, stdev=2285.23 00:33:10.181 lat (usec): min=4837, max=31213, avg=29230.59, stdev=2284.98 00:33:10.181 clat percentiles (usec): 00:33:10.181 | 1.00th=[20579], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.181 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.181 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.181 | 99.00th=[30016], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:33:10.181 | 99.99th=[31327] 00:33:10.181 bw ( KiB/s): min= 2048, max= 2432, per=4.20%, avg=2182.74, stdev=67.11, samples=19 00:33:10.181 iops : min= 512, max= 608, avg=545.68, stdev=16.78, samples=19 00:33:10.181 lat (msec) : 10=0.88%, 50=99.12% 00:33:10.181 cpu : usr=99.23%, sys=0.47%, ctx=11, majf=0, minf=56 00:33:10.181 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.181 filename0: (groupid=0, jobs=1): err= 0: pid=1758275: Mon Jun 10 11:40:05 2024 00:33:10.181 read: IOPS=547, BW=2188KiB/s (2241kB/s)(21.4MiB/10003msec) 00:33:10.181 slat (nsec): min=3759, max=64328, avg=14188.65, stdev=8746.32 00:33:10.181 clat (usec): min=3885, max=31450, avg=29137.11, stdev=2621.51 00:33:10.181 lat (usec): min=3896, max=31461, avg=29151.30, stdev=2621.88 00:33:10.181 clat percentiles (usec): 00:33:10.181 | 1.00th=[10421], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.181 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.181 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[30016], 00:33:10.181 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31327], 99.95th=[31327], 00:33:10.181 | 99.99th=[31327] 00:33:10.181 bw ( KiB/s): min= 2048, max= 2554, per=4.21%, avg=2189.37, stdev=93.06, samples=19 00:33:10.181 iops : min= 512, max= 638, avg=547.32, stdev=23.16, samples=19 00:33:10.181 lat (msec) : 4=0.13%, 10=0.75%, 20=0.55%, 
50=98.57% 00:33:10.181 cpu : usr=98.13%, sys=1.08%, ctx=67, majf=0, minf=131 00:33:10.181 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:10.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.181 filename0: (groupid=0, jobs=1): err= 0: pid=1758276: Mon Jun 10 11:40:05 2024 00:33:10.181 read: IOPS=546, BW=2188KiB/s (2240kB/s)(21.4MiB/10005msec) 00:33:10.181 slat (nsec): min=3018, max=45621, avg=11305.15, stdev=5466.11 00:33:10.181 clat (usec): min=1677, max=31530, avg=29157.80, stdev=2591.51 00:33:10.181 lat (usec): min=1683, max=31546, avg=29169.11, stdev=2591.78 00:33:10.181 clat percentiles (usec): 00:33:10.181 | 1.00th=[12649], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.181 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.181 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[30016], 00:33:10.181 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589], 00:33:10.181 | 99.99th=[31589] 00:33:10.181 bw ( KiB/s): min= 2048, max= 2554, per=4.21%, avg=2188.89, stdev=93.13, samples=19 00:33:10.181 iops : min= 512, max= 638, avg=547.16, stdev=23.18, samples=19 00:33:10.181 lat (msec) : 2=0.04%, 4=0.26%, 10=0.58%, 20=0.58%, 50=98.54% 00:33:10.181 cpu : usr=98.99%, sys=0.74%, ctx=17, majf=0, minf=86 00:33:10.181 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:10.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.181 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.181 filename0: (groupid=0, jobs=1): err= 0: pid=1758278: Mon Jun 10 11:40:05 2024 00:33:10.181 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.1MiB/10007msec) 00:33:10.181 slat (nsec): min=7327, max=88937, avg=26052.08, stdev=15760.11 00:33:10.181 clat (usec): min=22813, max=49335, avg=29342.85, stdev=1170.43 00:33:10.181 lat (usec): min=22823, max=49368, avg=29368.90, stdev=1170.80 00:33:10.181 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.182 | 99.00th=[30016], 99.50th=[30802], 99.90th=[49021], 99.95th=[49546], 00:33:10.182 | 99.99th=[49546] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.74, stdev=39.73, samples=19 00:33:10.182 iops : min= 512, max= 544, avg=540.68, stdev= 9.93, samples=19 00:33:10.182 lat (msec) : 50=100.00% 00:33:10.182 cpu : usr=98.04%, sys=1.21%, ctx=81, majf=0, minf=56 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename0: (groupid=0, jobs=1): err= 0: pid=1758279: Mon Jun 10 11:40:05 2024 00:33:10.182 
read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10010msec) 00:33:10.182 slat (nsec): min=4513, max=82387, avg=21667.13, stdev=16220.15 00:33:10.182 clat (usec): min=22913, max=52926, avg=29446.50, stdev=1356.33 00:33:10.182 lat (usec): min=22922, max=52940, avg=29468.17, stdev=1355.10 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.182 | 99.00th=[30016], 99.50th=[30540], 99.90th=[52691], 99.95th=[52691], 00:33:10.182 | 99.99th=[52691] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.182 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.182 lat (msec) : 50=99.70%, 100=0.30% 00:33:10.182 cpu : usr=98.19%, sys=1.05%, ctx=46, majf=0, minf=54 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename0: (groupid=0, jobs=1): err= 0: pid=1758280: Mon Jun 10 11:40:05 2024 00:33:10.182 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10009msec) 00:33:10.182 slat (nsec): min=5536, max=88868, avg=27304.64, stdev=16180.19 00:33:10.182 clat (usec): min=22773, max=51605, avg=29353.49, stdev=1290.04 00:33:10.182 lat (usec): min=22781, max=51620, avg=29380.79, stdev=1289.63 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.182 | 99.00th=[30016], 99.50th=[30802], 99.90th=[51643], 99.95th=[51643], 00:33:10.182 | 99.99th=[51643] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2156.00, stdev=47.46, samples=19 00:33:10.182 iops : min= 512, max= 544, avg=539.00, stdev=11.86, samples=19 00:33:10.182 lat (msec) : 50=99.70%, 100=0.30% 00:33:10.182 cpu : usr=99.05%, sys=0.62%, ctx=56, majf=0, minf=52 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename0: (groupid=0, jobs=1): err= 0: pid=1758281: Mon Jun 10 11:40:05 2024 00:33:10.182 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10016msec) 00:33:10.182 slat (nsec): min=7267, max=99499, avg=22299.72, stdev=16248.23 00:33:10.182 clat (usec): min=20489, max=33417, avg=29377.79, stdev=636.80 00:33:10.182 lat (usec): min=20499, max=33446, avg=29400.09, stdev=635.15 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.182 | 
99.00th=[30016], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:33:10.182 | 99.99th=[33424] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2163.20, stdev=39.40, samples=20 00:33:10.182 iops : min= 512, max= 544, avg=540.80, stdev= 9.85, samples=20 00:33:10.182 lat (msec) : 50=100.00% 00:33:10.182 cpu : usr=99.04%, sys=0.58%, ctx=43, majf=0, minf=58 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename0: (groupid=0, jobs=1): err= 0: pid=1758282: Mon Jun 10 11:40:05 2024 00:33:10.182 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.1MiB/10004msec) 00:33:10.182 slat (usec): min=7, max=101, avg=27.36, stdev=14.27 00:33:10.182 clat (usec): min=11781, max=58318, avg=29373.69, stdev=1863.47 00:33:10.182 lat (usec): min=11790, max=58340, avg=29401.05, stdev=1862.85 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28181], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.182 | 99.00th=[30016], 99.50th=[30802], 99.90th=[58459], 99.95th=[58459], 00:33:10.182 | 99.99th=[58459] 00:33:10.182 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2155.26, stdev=48.48, samples=19 00:33:10.182 iops : min= 510, max= 544, avg=538.74, stdev=12.21, samples=19 00:33:10.182 lat (msec) : 20=0.30%, 50=99.41%, 100=0.30% 00:33:10.182 cpu : usr=99.06%, sys=0.61%, ctx=30, majf=0, minf=76 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename1: (groupid=0, jobs=1): err= 0: pid=1758283: Mon Jun 10 11:40:05 2024 00:33:10.182 read: IOPS=542, BW=2168KiB/s (2220kB/s)(21.2MiB/10007msec) 00:33:10.182 slat (nsec): min=7282, max=55169, avg=16805.11, stdev=9165.79 00:33:10.182 clat (usec): min=13778, max=31750, avg=29364.94, stdev=922.91 00:33:10.182 lat (usec): min=13789, max=31784, avg=29381.74, stdev=923.14 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[30016], 00:33:10.182 | 99.00th=[30278], 99.50th=[31327], 99.90th=[31589], 99.95th=[31851], 00:33:10.182 | 99.99th=[31851] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.18%, avg=2169.26, stdev=29.37, samples=19 00:33:10.182 iops : min= 512, max= 544, avg=542.32, stdev= 7.34, samples=19 00:33:10.182 lat (msec) : 20=0.29%, 50=99.71% 00:33:10.182 cpu : usr=98.33%, sys=0.86%, ctx=80, majf=0, minf=67 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename1: (groupid=0, jobs=1): err= 0: pid=1758284: Mon Jun 10 11:40:05 2024 00:33:10.182 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10008msec) 00:33:10.182 slat (nsec): min=6192, max=87045, avg=23684.86, stdev=17551.81 00:33:10.182 clat (usec): min=22712, max=50243, avg=29422.38, stdev=1223.50 00:33:10.182 lat (usec): min=22727, max=50261, avg=29446.06, stdev=1222.01 00:33:10.182 clat percentiles (usec): 00:33:10.182 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.182 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:33:10.182 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.182 | 99.00th=[30016], 99.50th=[31065], 99.90th=[50070], 99.95th=[50070], 00:33:10.182 | 99.99th=[50070] 00:33:10.182 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.53, stdev=40.36, samples=19 00:33:10.182 iops : min= 512, max= 544, avg=540.63, stdev=10.09, samples=19 00:33:10.182 lat (msec) : 50=99.70%, 100=0.30% 00:33:10.182 cpu : usr=99.15%, sys=0.54%, ctx=11, majf=0, minf=59 00:33:10.182 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.182 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.182 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.182 filename1: (groupid=0, jobs=1): err= 0: pid=1758285: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10004msec) 00:33:10.183 slat (nsec): min=6236, max=91103, avg=30110.10, stdev=16615.72 00:33:10.183 clat (usec): min=7975, max=50987, avg=29212.52, stdev=1926.23 00:33:10.183 lat (usec): min=7983, max=51008, avg=29242.63, stdev=1926.89 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[28181], 5.00th=[28705], 10.00th=[28967], 20.00th=[28967], 00:33:10.183 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.183 | 99.00th=[30016], 99.50th=[30802], 99.90th=[51119], 99.95th=[51119], 00:33:10.183 | 99.99th=[51119] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.183 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.183 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.183 cpu : usr=98.15%, sys=1.08%, ctx=83, majf=0, minf=50 00:33:10.183 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename1: (groupid=0, jobs=1): err= 0: pid=1758286: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10004msec) 00:33:10.183 slat (usec): min=5, max=103, avg=32.12, stdev=16.93 00:33:10.183 clat (usec): min=7709, max=50658, avg=29203.36, stdev=1923.11 00:33:10.183 lat (usec): min=7717, max=50675, avg=29235.47, stdev=1924.08 00:33:10.183 clat 
percentiles (usec): 00:33:10.183 | 1.00th=[28181], 5.00th=[28705], 10.00th=[28967], 20.00th=[28967], 00:33:10.183 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.183 | 99.00th=[30016], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:33:10.183 | 99.99th=[50594] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.183 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.183 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.183 cpu : usr=99.31%, sys=0.40%, ctx=9, majf=0, minf=48 00:33:10.183 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename1: (groupid=0, jobs=1): err= 0: pid=1758287: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10005msec) 00:33:10.183 slat (nsec): min=6466, max=94873, avg=28913.61, stdev=15776.16 00:33:10.183 clat (usec): min=7284, max=51604, avg=29240.27, stdev=1964.88 00:33:10.183 lat (usec): min=7294, max=51621, avg=29269.18, stdev=1965.45 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[28181], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.183 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.183 | 99.00th=[30016], 99.50th=[30802], 99.90th=[51643], 99.95th=[51643], 00:33:10.183 | 99.99th=[51643] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2156.00, stdev=47.46, samples=19 00:33:10.183 iops : min= 512, max= 544, avg=539.00, stdev=11.86, samples=19 00:33:10.183 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.183 cpu : usr=98.86%, sys=0.76%, ctx=34, majf=0, minf=75 00:33:10.183 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename1: (groupid=0, jobs=1): err= 0: pid=1758288: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10011msec) 00:33:10.183 slat (nsec): min=7425, max=85690, avg=24639.76, stdev=15839.22 00:33:10.183 clat (usec): min=24952, max=48478, avg=29421.37, stdev=1091.64 00:33:10.183 lat (usec): min=24978, max=48500, avg=29446.01, stdev=1090.43 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.183 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.183 | 99.00th=[30016], 99.50th=[31065], 99.90th=[48497], 99.95th=[48497], 00:33:10.183 | 99.99th=[48497] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.74, stdev=39.73, samples=19 00:33:10.183 iops : min= 512, max= 544, avg=540.68, stdev= 9.93, samples=19 
00:33:10.183 lat (msec) : 50=100.00% 00:33:10.183 cpu : usr=98.03%, sys=1.14%, ctx=126, majf=0, minf=61 00:33:10.183 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename1: (groupid=0, jobs=1): err= 0: pid=1758289: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.1MiB/10007msec) 00:33:10.183 slat (nsec): min=7527, max=79532, avg=24065.53, stdev=14591.12 00:33:10.183 clat (usec): min=15824, max=63301, avg=29374.66, stdev=1320.85 00:33:10.183 lat (usec): min=15832, max=63321, avg=29398.73, stdev=1320.82 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.183 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29492], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.183 | 99.00th=[30016], 99.50th=[30802], 99.90th=[49546], 99.95th=[49546], 00:33:10.183 | 99.99th=[63177] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.74, stdev=39.73, samples=19 00:33:10.183 iops : min= 512, max= 544, avg=540.68, stdev= 9.93, samples=19 00:33:10.183 lat (msec) : 20=0.04%, 50=99.93%, 100=0.04% 00:33:10.183 cpu : usr=99.29%, sys=0.42%, ctx=9, majf=0, minf=53 00:33:10.183 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename1: (groupid=0, jobs=1): err= 0: pid=1758291: Mon Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=533, BW=2133KiB/s (2185kB/s)(20.8MiB/10005msec) 00:33:10.183 slat (nsec): min=7245, max=84769, avg=14713.32, stdev=12032.50 00:33:10.183 clat (usec): min=11187, max=59654, avg=29936.27, stdev=4640.17 00:33:10.183 lat (usec): min=11195, max=59676, avg=29950.98, stdev=4640.26 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[19006], 5.00th=[22938], 10.00th=[24511], 20.00th=[27132], 00:33:10.183 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.183 | 70.00th=[29492], 80.00th=[32375], 90.00th=[36439], 95.00th=[38536], 00:33:10.183 | 99.00th=[43254], 99.50th=[47973], 99.90th=[50594], 99.95th=[59507], 00:33:10.183 | 99.99th=[59507] 00:33:10.183 bw ( KiB/s): min= 1664, max= 2240, per=4.09%, avg=2126.32, stdev=176.23, samples=19 00:33:10.183 iops : min= 416, max= 560, avg=531.58, stdev=44.06, samples=19 00:33:10.183 lat (msec) : 20=1.09%, 50=98.61%, 100=0.30% 00:33:10.183 cpu : usr=97.81%, sys=1.35%, ctx=695, majf=0, minf=70 00:33:10.183 IO depths : 1=0.1%, 2=0.1%, 4=3.5%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:33:10.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.183 issued rwts: total=5336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.183 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.183 filename2: (groupid=0, jobs=1): err= 0: pid=1758292: Mon 
Jun 10 11:40:05 2024 00:33:10.183 read: IOPS=544, BW=2178KiB/s (2230kB/s)(21.3MiB/10003msec) 00:33:10.183 slat (nsec): min=7275, max=95186, avg=11075.75, stdev=7764.03 00:33:10.183 clat (usec): min=4880, max=30947, avg=29294.81, stdev=1824.70 00:33:10.183 lat (usec): min=4895, max=30960, avg=29305.89, stdev=1824.23 00:33:10.183 clat percentiles (usec): 00:33:10.183 | 1.00th=[24249], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.183 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.183 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[30016], 00:33:10.183 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30802], 99.95th=[30802], 00:33:10.183 | 99.99th=[31065] 00:33:10.183 bw ( KiB/s): min= 2048, max= 2352, per=4.19%, avg=2178.53, stdev=51.23, samples=19 00:33:10.183 iops : min= 512, max= 588, avg=544.63, stdev=12.81, samples=19 00:33:10.183 lat (msec) : 10=0.40%, 20=0.40%, 50=99.19% 00:33:10.183 cpu : usr=98.29%, sys=1.03%, ctx=99, majf=0, minf=90 00:33:10.183 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758293: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10004msec) 00:33:10.184 slat (usec): min=7, max=101, avg=31.37, stdev=16.50 00:33:10.184 clat (usec): min=8033, max=50636, avg=29201.02, stdev=1916.19 00:33:10.184 lat (usec): min=8042, max=50667, avg=29232.38, stdev=1917.00 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28181], 5.00th=[28705], 10.00th=[28967], 20.00th=[28967], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:33:10.184 | 99.99th=[50594] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.184 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.184 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.184 cpu : usr=98.99%, sys=0.64%, ctx=66, majf=0, minf=69 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758294: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.1MiB/10002msec) 00:33:10.184 slat (nsec): min=7269, max=92264, avg=26365.69, stdev=17314.55 00:33:10.184 clat (usec): min=22781, max=44711, avg=29309.92, stdev=943.40 00:33:10.184 lat (usec): min=22791, max=44744, avg=29336.29, stdev=944.50 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 
95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30802], 99.90th=[44827], 99.95th=[44827], 00:33:10.184 | 99.99th=[44827] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.53, stdev=40.36, samples=19 00:33:10.184 iops : min= 512, max= 544, avg=540.63, stdev=10.09, samples=19 00:33:10.184 lat (msec) : 50=100.00% 00:33:10.184 cpu : usr=98.21%, sys=1.08%, ctx=89, majf=0, minf=47 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758295: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=540, BW=2163KiB/s (2214kB/s)(21.1MiB/10003msec) 00:33:10.184 slat (nsec): min=6450, max=51679, avg=16286.89, stdev=9531.35 00:33:10.184 clat (usec): min=25301, max=41624, avg=29430.36, stdev=749.16 00:33:10.184 lat (usec): min=25310, max=41642, avg=29446.65, stdev=749.06 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.184 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.184 | 99.00th=[30278], 99.50th=[31327], 99.90th=[41681], 99.95th=[41681], 00:33:10.184 | 99.99th=[41681] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.53, stdev=40.36, samples=19 00:33:10.184 iops : min= 512, max= 544, avg=540.63, stdev=10.09, samples=19 00:33:10.184 lat (msec) : 50=100.00% 00:33:10.184 cpu : usr=98.28%, sys=0.98%, ctx=107, majf=0, minf=65 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758296: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10016msec) 00:33:10.184 slat (nsec): min=7230, max=93162, avg=30071.63, stdev=17532.80 00:33:10.184 clat (usec): min=20558, max=33484, avg=29299.49, stdev=630.86 00:33:10.184 lat (usec): min=20566, max=33505, avg=29329.56, stdev=630.36 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28443], 5.00th=[28967], 10.00th=[28967], 20.00th=[29230], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29492], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30802], 99.90th=[33424], 99.95th=[33424], 00:33:10.184 | 99.99th=[33424] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2163.20, stdev=39.40, samples=20 00:33:10.184 iops : min= 512, max= 544, avg=540.80, stdev= 9.85, samples=20 00:33:10.184 lat (msec) : 50=100.00% 00:33:10.184 cpu : usr=98.20%, sys=1.05%, ctx=94, majf=0, minf=73 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758297: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10004msec) 00:33:10.184 slat (nsec): min=7316, max=95362, avg=31594.06, stdev=16662.03 00:33:10.184 clat (usec): min=8043, max=50634, avg=29203.02, stdev=1918.08 00:33:10.184 lat (usec): min=8086, max=50654, avg=29234.61, stdev=1918.64 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28181], 5.00th=[28705], 10.00th=[28967], 20.00th=[28967], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:33:10.184 | 99.99th=[50594] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.184 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.184 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.184 cpu : usr=99.00%, sys=0.59%, ctx=92, majf=0, minf=83 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758298: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=542, BW=2168KiB/s (2220kB/s)(21.2MiB/10006msec) 00:33:10.184 slat (nsec): min=5818, max=97881, avg=32969.49, stdev=17424.20 00:33:10.184 clat (usec): min=8008, max=52709, avg=29200.80, stdev=1991.77 00:33:10.184 lat (usec): min=8022, max=52725, avg=29233.77, stdev=1992.14 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28181], 5.00th=[28705], 10.00th=[28967], 20.00th=[28967], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29230], 60.00th=[29230], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29492], 95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30540], 99.90th=[52691], 99.95th=[52691], 00:33:10.184 | 99.99th=[52691] 00:33:10.184 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2155.79, stdev=47.95, samples=19 00:33:10.184 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:33:10.184 lat (msec) : 10=0.29%, 20=0.29%, 50=99.12%, 100=0.29% 00:33:10.184 cpu : usr=98.26%, sys=0.96%, ctx=66, majf=0, minf=51 00:33:10.184 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.184 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.184 filename2: (groupid=0, jobs=1): err= 0: pid=1758299: Mon Jun 10 11:40:05 2024 00:33:10.184 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10008msec) 00:33:10.184 slat (nsec): min=5328, max=91389, avg=21549.70, stdev=12047.76 00:33:10.184 clat (usec): min=22974, max=51030, avg=29432.61, stdev=1256.78 00:33:10.184 lat (usec): min=22982, max=51044, 
avg=29454.16, stdev=1255.89 00:33:10.184 clat percentiles (usec): 00:33:10.184 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:33:10.184 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:33:10.184 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:33:10.184 | 99.00th=[30016], 99.50th=[30802], 99.90th=[51119], 99.95th=[51119], 00:33:10.184 | 99.99th=[51119] 00:33:10.185 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2162.53, stdev=40.36, samples=19 00:33:10.185 iops : min= 512, max= 544, avg=540.63, stdev=10.09, samples=19 00:33:10.185 lat (msec) : 50=99.70%, 100=0.30% 00:33:10.185 cpu : usr=98.92%, sys=0.77%, ctx=32, majf=0, minf=73 00:33:10.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:10.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.185 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:10.185 00:33:10.185 Run status group 0 (all jobs): 00:33:10.185 READ: bw=50.7MiB/s (53.2MB/s), 2133KiB/s-2188KiB/s (2185kB/s-2241kB/s), io=508MiB (533MB), run=10002-10016msec 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 
11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 bdev_null0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 [2024-06-10 11:40:05.820732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 bdev_null1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:33:10.185 11:40:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:10.185 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:10.185 { 00:33:10.185 "params": { 00:33:10.185 "name": "Nvme$subsystem", 00:33:10.185 "trtype": "$TEST_TRANSPORT", 00:33:10.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.186 "adrfam": "ipv4", 00:33:10.186 "trsvcid": "$NVMF_PORT", 00:33:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.186 "hdgst": ${hdgst:-false}, 00:33:10.186 "ddgst": ${ddgst:-false} 00:33:10.186 }, 00:33:10.186 "method": "bdev_nvme_attach_controller" 00:33:10.186 } 00:33:10.186 EOF 00:33:10.186 )") 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:10.186 { 00:33:10.186 "params": { 00:33:10.186 "name": "Nvme$subsystem", 00:33:10.186 "trtype": "$TEST_TRANSPORT", 00:33:10.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.186 "adrfam": "ipv4", 00:33:10.186 "trsvcid": "$NVMF_PORT", 00:33:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.186 "hdgst": ${hdgst:-false}, 00:33:10.186 "ddgst": ${ddgst:-false} 00:33:10.186 }, 00:33:10.186 "method": "bdev_nvme_attach_controller" 00:33:10.186 } 00:33:10.186 EOF 
00:33:10.186 )") 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:10.186 "params": { 00:33:10.186 "name": "Nvme0", 00:33:10.186 "trtype": "tcp", 00:33:10.186 "traddr": "10.0.0.2", 00:33:10.186 "adrfam": "ipv4", 00:33:10.186 "trsvcid": "4420", 00:33:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.186 "hdgst": false, 00:33:10.186 "ddgst": false 00:33:10.186 }, 00:33:10.186 "method": "bdev_nvme_attach_controller" 00:33:10.186 },{ 00:33:10.186 "params": { 00:33:10.186 "name": "Nvme1", 00:33:10.186 "trtype": "tcp", 00:33:10.186 "traddr": "10.0.0.2", 00:33:10.186 "adrfam": "ipv4", 00:33:10.186 "trsvcid": "4420", 00:33:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:10.186 "hdgst": false, 00:33:10.186 "ddgst": false 00:33:10.186 }, 00:33:10.186 "method": "bdev_nvme_attach_controller" 00:33:10.186 }' 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.186 11:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.186 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:10.186 ... 00:33:10.186 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:10.186 ... 
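The trace above is the whole wiring for this run: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, the spdk_bdev fio engine is preloaded from build/fio/spdk_bdev, and both the JSON bdev config and the generated job file are streamed in on /dev/fd/62 and /dev/fd/61. A rough standalone equivalent is sketched below; the outer "subsystems"/"bdev" wrapper and the nvme.json / dif.fio file names are assumptions (the script never writes real files), while the parameter values are copied from the resolved config printed above.

# Sketch only: reproduce the traced invocation with ordinary files instead of
# /dev/fd process substitution. The second subsystem (Nvme1 / cnode1) would get
# an identical config entry with its own name and subnqn.
cat > nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# The spdk_bdev engine treats bdev names (Nvme0n1, Nvme1n1, ...) as fio
# filenames, so the job file in dif.fio simply references those names.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf nvme.json dif.fio
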
00:33:10.186 fio-3.35 00:33:10.186 Starting 4 threads 00:33:10.186 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.459 00:33:15.459 filename0: (groupid=0, jobs=1): err= 0: pid=1760377: Mon Jun 10 11:40:11 2024 00:33:15.459 read: IOPS=2247, BW=17.6MiB/s (18.4MB/s)(87.8MiB/5002msec) 00:33:15.459 slat (nsec): min=7239, max=51814, avg=8904.04, stdev=4110.30 00:33:15.459 clat (usec): min=1593, max=6439, avg=3534.47, stdev=630.94 00:33:15.459 lat (usec): min=1619, max=6446, avg=3543.37, stdev=630.87 00:33:15.459 clat percentiles (usec): 00:33:15.459 | 1.00th=[ 2311], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3130], 00:33:15.459 | 30.00th=[ 3228], 40.00th=[ 3326], 50.00th=[ 3425], 60.00th=[ 3490], 00:33:15.459 | 70.00th=[ 3621], 80.00th=[ 3818], 90.00th=[ 4555], 95.00th=[ 5014], 00:33:15.459 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 5866], 99.95th=[ 6325], 00:33:15.459 | 99.99th=[ 6456] 00:33:15.459 bw ( KiB/s): min=17600, max=18404, per=24.83%, avg=17952.44, stdev=282.85, samples=9 00:33:15.459 iops : min= 2200, max= 2300, avg=2244.00, stdev=35.26, samples=9 00:33:15.459 lat (msec) : 2=0.27%, 4=83.77%, 10=15.97% 00:33:15.459 cpu : usr=97.72%, sys=1.98%, ctx=9, majf=0, minf=70 00:33:15.459 IO depths : 1=0.4%, 2=1.1%, 4=71.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 issued rwts: total=11242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.459 filename0: (groupid=0, jobs=1): err= 0: pid=1760378: Mon Jun 10 11:40:11 2024 00:33:15.459 read: IOPS=2244, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5001msec) 00:33:15.459 slat (nsec): min=7230, max=50242, avg=8688.89, stdev=3818.43 00:33:15.459 clat (usec): min=1307, max=6844, avg=3540.33, stdev=614.89 00:33:15.459 lat (usec): min=1314, max=6873, avg=3549.02, stdev=614.83 00:33:15.459 clat percentiles (usec): 00:33:15.459 | 1.00th=[ 2409], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3130], 00:33:15.459 | 30.00th=[ 3261], 40.00th=[ 3326], 50.00th=[ 3425], 60.00th=[ 3490], 00:33:15.459 | 70.00th=[ 3589], 80.00th=[ 3785], 90.00th=[ 4555], 95.00th=[ 4948], 00:33:15.459 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6128], 99.95th=[ 6325], 00:33:15.459 | 99.99th=[ 6783] 00:33:15.459 bw ( KiB/s): min=17536, max=18352, per=24.81%, avg=17942.67, stdev=214.63, samples=9 00:33:15.459 iops : min= 2192, max= 2294, avg=2242.78, stdev=26.84, samples=9 00:33:15.459 lat (msec) : 2=0.27%, 4=84.79%, 10=14.94% 00:33:15.459 cpu : usr=97.26%, sys=2.48%, ctx=14, majf=0, minf=84 00:33:15.459 IO depths : 1=0.6%, 2=1.4%, 4=71.2%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 issued rwts: total=11224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.459 filename1: (groupid=0, jobs=1): err= 0: pid=1760379: Mon Jun 10 11:40:11 2024 00:33:15.459 read: IOPS=2282, BW=17.8MiB/s (18.7MB/s)(89.2MiB/5002msec) 00:33:15.459 slat (nsec): min=7236, max=54219, avg=9106.76, stdev=4015.81 00:33:15.459 clat (usec): min=1201, max=6589, avg=3477.91, stdev=537.28 00:33:15.459 lat (usec): min=1214, max=6598, avg=3487.02, stdev=537.23 00:33:15.459 clat percentiles (usec): 00:33:15.459 | 1.00th=[ 2343], 5.00th=[ 2737], 
10.00th=[ 2966], 20.00th=[ 3163], 00:33:15.459 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3392], 60.00th=[ 3490], 00:33:15.459 | 70.00th=[ 3589], 80.00th=[ 3785], 90.00th=[ 4146], 95.00th=[ 4621], 00:33:15.459 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 6128], 99.95th=[ 6456], 00:33:15.459 | 99.99th=[ 6587] 00:33:15.459 bw ( KiB/s): min=17952, max=18560, per=25.25%, avg=18261.33, stdev=213.77, samples=9 00:33:15.459 iops : min= 2244, max= 2320, avg=2282.67, stdev=26.72, samples=9 00:33:15.459 lat (msec) : 2=0.30%, 4=87.44%, 10=12.26% 00:33:15.459 cpu : usr=96.86%, sys=2.86%, ctx=8, majf=0, minf=89 00:33:15.459 IO depths : 1=0.6%, 2=1.6%, 4=71.2%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 issued rwts: total=11419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.459 filename1: (groupid=0, jobs=1): err= 0: pid=1760380: Mon Jun 10 11:40:11 2024 00:33:15.459 read: IOPS=2266, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5003msec) 00:33:15.459 slat (nsec): min=3692, max=56323, avg=8856.27, stdev=4145.66 00:33:15.459 clat (usec): min=1355, max=47453, avg=3505.64, stdev=1322.47 00:33:15.459 lat (usec): min=1362, max=47470, avg=3514.49, stdev=1322.46 00:33:15.459 clat percentiles (usec): 00:33:15.459 | 1.00th=[ 2147], 5.00th=[ 2638], 10.00th=[ 2868], 20.00th=[ 3064], 00:33:15.459 | 30.00th=[ 3195], 40.00th=[ 3294], 50.00th=[ 3392], 60.00th=[ 3458], 00:33:15.459 | 70.00th=[ 3556], 80.00th=[ 3752], 90.00th=[ 4359], 95.00th=[ 4883], 00:33:15.459 | 99.00th=[ 5342], 99.50th=[ 5669], 99.90th=[ 6325], 99.95th=[47449], 00:33:15.459 | 99.99th=[47449] 00:33:15.459 bw ( KiB/s): min=16704, max=18624, per=25.03%, avg=18097.78, stdev=584.85, samples=9 00:33:15.459 iops : min= 2088, max= 2328, avg=2262.22, stdev=73.11, samples=9 00:33:15.459 lat (msec) : 2=0.55%, 4=85.16%, 10=14.22%, 50=0.07% 00:33:15.459 cpu : usr=97.40%, sys=2.32%, ctx=8, majf=0, minf=103 00:33:15.459 IO depths : 1=0.4%, 2=1.1%, 4=70.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.459 issued rwts: total=11337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.459 00:33:15.459 Run status group 0 (all jobs): 00:33:15.459 READ: bw=70.6MiB/s (74.0MB/s), 17.5MiB/s-17.8MiB/s (18.4MB/s-18.7MB/s), io=353MiB (370MB), run=5001-5003msec 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.459 00:33:15.459 real 0m24.107s 00:33:15.459 user 5m4.869s 00:33:15.459 sys 0m3.957s 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 ************************************ 00:33:15.459 END TEST fio_dif_rand_params 00:33:15.459 ************************************ 00:33:15.459 11:40:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:15.459 11:40:12 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:15.459 11:40:12 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:15.459 11:40:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:15.459 ************************************ 00:33:15.459 START TEST fio_dif_digest 00:33:15.460 ************************************ 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:15.460 bdev_null0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:15.460 [2024-06-10 11:40:12.251947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:15.460 { 00:33:15.460 "params": { 00:33:15.460 "name": "Nvme$subsystem", 00:33:15.460 "trtype": "$TEST_TRANSPORT", 00:33:15.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.460 "adrfam": "ipv4", 00:33:15.460 "trsvcid": "$NVMF_PORT", 00:33:15.460 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.460 "hdgst": ${hdgst:-false}, 00:33:15.460 "ddgst": ${ddgst:-false} 00:33:15.460 }, 00:33:15.460 "method": "bdev_nvme_attach_controller" 00:33:15.460 } 00:33:15.460 EOF 00:33:15.460 )") 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:15.460 "params": { 00:33:15.460 "name": "Nvme0", 00:33:15.460 "trtype": "tcp", 00:33:15.460 "traddr": "10.0.0.2", 00:33:15.460 "adrfam": "ipv4", 00:33:15.460 "trsvcid": "4420", 00:33:15.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.460 "hdgst": true, 00:33:15.460 "ddgst": true 00:33:15.460 }, 00:33:15.460 "method": "bdev_nvme_attach_controller" 00:33:15.460 }' 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:15.460 11:40:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.460 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:15.460 ... 
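For the digest pass the fio side only changes block size, queue depth and thread count; the header and data digests themselves are enabled in the bdev layer through the "hdgst": true / "ddgst": true parameters visible in the resolved bdev_nvme_attach_controller config above. A minimal sketch of the job file that gen_fio_conf streams to fio on /dev/fd/61 for this run, reconstructed from the traced parameters (bs=128k, numjobs=3, iodepth=3, runtime=10); the exact option list and the Nvme0n1 filename are assumptions:

# Sketch of the digest job file, written to a regular file instead of /dev/fd.
cat <<'FIO' > digest.fio
[global]
thread=1
ioengine=spdk_bdev
direct=1
bs=128k
iodepth=3
time_based=1
runtime=10

[filename0]
rw=randread
filename=Nvme0n1
numjobs=3
FIO
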
00:33:15.460 fio-3.35 00:33:15.460 Starting 3 threads 00:33:15.460 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.682 00:33:27.682 filename0: (groupid=0, jobs=1): err= 0: pid=1761625: Mon Jun 10 11:40:23 2024 00:33:27.683 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(289MiB/10048msec) 00:33:27.683 slat (nsec): min=3994, max=31033, avg=8374.81, stdev=1220.25 00:33:27.683 clat (usec): min=9638, max=54560, avg=13025.90, stdev=2553.78 00:33:27.683 lat (usec): min=9647, max=54568, avg=13034.27, stdev=2553.77 00:33:27.683 clat percentiles (usec): 00:33:27.683 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:33:27.683 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:33:27.683 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:33:27.683 | 99.00th=[15533], 99.50th=[16581], 99.90th=[53216], 99.95th=[54264], 00:33:27.683 | 99.99th=[54789] 00:33:27.683 bw ( KiB/s): min=26880, max=30976, per=32.69%, avg=29529.60, stdev=1014.78, samples=20 00:33:27.683 iops : min= 210, max= 242, avg=230.70, stdev= 7.93, samples=20 00:33:27.683 lat (msec) : 10=0.09%, 20=99.57%, 100=0.35% 00:33:27.683 cpu : usr=95.84%, sys=3.92%, ctx=16, majf=0, minf=104 00:33:27.683 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:27.683 filename0: (groupid=0, jobs=1): err= 0: pid=1761626: Mon Jun 10 11:40:23 2024 00:33:27.683 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10048msec) 00:33:27.683 slat (nsec): min=7523, max=31602, avg=8930.35, stdev=1588.78 00:33:27.683 clat (usec): min=8742, max=53137, avg=13370.75, stdev=1560.08 00:33:27.683 lat (usec): min=8750, max=53146, avg=13379.68, stdev=1560.09 00:33:27.683 clat percentiles (usec): 00:33:27.683 | 1.00th=[10552], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:33:27.683 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:33:27.683 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:33:27.683 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17433], 99.95th=[47449], 00:33:27.683 | 99.99th=[53216] 00:33:27.683 bw ( KiB/s): min=28160, max=29952, per=31.84%, avg=28761.60, stdev=449.39, samples=20 00:33:27.683 iops : min= 220, max= 234, avg=224.70, stdev= 3.51, samples=20 00:33:27.683 lat (msec) : 10=0.53%, 20=99.38%, 50=0.04%, 100=0.04% 00:33:27.683 cpu : usr=95.73%, sys=4.00%, ctx=18, majf=0, minf=135 00:33:27.683 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:27.683 filename0: (groupid=0, jobs=1): err= 0: pid=1761627: Mon Jun 10 11:40:23 2024 00:33:27.683 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(317MiB/10045msec) 00:33:27.683 slat (nsec): min=7512, max=45668, avg=8313.50, stdev=1278.51 00:33:27.683 clat (usec): min=7456, max=48407, avg=11865.34, stdev=1414.81 00:33:27.683 lat (usec): min=7465, max=48415, avg=11873.65, stdev=1414.85 00:33:27.683 clat percentiles (usec): 00:33:27.683 | 
1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:33:27.683 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:33:27.683 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:33:27.683 | 99.00th=[14353], 99.50th=[14615], 99.90th=[16712], 99.95th=[45876], 00:33:27.683 | 99.99th=[48497] 00:33:27.683 bw ( KiB/s): min=31488, max=33792, per=35.88%, avg=32412.75, stdev=554.33, samples=20 00:33:27.683 iops : min= 246, max= 264, avg=253.20, stdev= 4.37, samples=20 00:33:27.683 lat (msec) : 10=3.51%, 20=96.41%, 50=0.08% 00:33:27.683 cpu : usr=95.78%, sys=3.98%, ctx=14, majf=0, minf=171 00:33:27.683 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.683 issued rwts: total=2534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:27.683 00:33:27.683 Run status group 0 (all jobs): 00:33:27.683 READ: bw=88.2MiB/s (92.5MB/s), 28.0MiB/s-31.5MiB/s (29.3MB/s-33.1MB/s), io=887MiB (930MB), run=10045-10048msec 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.683 00:33:27.683 real 0m11.201s 00:33:27.683 user 0m39.175s 00:33:27.683 sys 0m1.479s 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:27.683 11:40:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:27.683 ************************************ 00:33:27.683 END TEST fio_dif_digest 00:33:27.683 ************************************ 00:33:27.683 11:40:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:27.683 11:40:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:27.683 rmmod nvme_tcp 00:33:27.683 rmmod nvme_fabrics 00:33:27.683 
rmmod nvme_keyring 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1752702 ']' 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1752702 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1752702 ']' 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1752702 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1752702 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1752702' 00:33:27.683 killing process with pid 1752702 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1752702 00:33:27.683 11:40:23 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1752702 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:27.683 11:40:23 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:30.985 Waiting for block devices as requested 00:33:30.985 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:30.985 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:31.244 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:33:31.244 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:31.244 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:31.505 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:31.505 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:31.505 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:31.765 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:31.765 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:31.765 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:31.765 11:40:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:31.765 11:40:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:31.765 11:40:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:31.765 11:40:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:31.765 11:40:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.765 11:40:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:31.765 11:40:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.310 11:40:31 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:34.310 00:33:34.310 real 1m18.661s 00:33:34.310 user 7m33.517s 00:33:34.310 sys 0m20.561s 00:33:34.310 11:40:31 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:34.310 11:40:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:33:34.310 ************************************ 00:33:34.310 END TEST nvmf_dif 00:33:34.310 ************************************ 00:33:34.310 11:40:31 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:34.310 11:40:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:34.310 11:40:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:34.310 11:40:31 -- common/autotest_common.sh@10 -- # set +x 00:33:34.310 ************************************ 00:33:34.310 START TEST nvmf_abort_qd_sizes 00:33:34.310 ************************************ 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:34.310 * Looking for test storage... 00:33:34.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.310 11:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:34.311 11:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.311 11:40:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:34.311 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:34.311 11:40:31 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:34.311 11:40:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:42.450 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:42.450 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:42.450 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:42.450 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
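The gather_supported_nvmf_pci_devs pass above builds the candidate NIC list from cached PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX family) and then resolves each function to its kernel net device through sysfs. A minimal sketch of that resolution step, assuming only the two E810 functions found in this run:

pci_devs=(0000:4b:00.0 0000:4b:00.1)                    # E810 functions bound to the ice driver
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdev entries exposed by this function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done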
00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:42.450 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:42.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:33:42.451 00:33:42.451 --- 10.0.0.2 ping statistics --- 00:33:42.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.451 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
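nvmf_tcp_init then splits the two ports into a back-to-back test topology: cvl_0_0 becomes the target-side interface inside the cvl_0_0_ns_spdk namespace with 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, TCP port 4420 is opened in iptables, and the two pings verify reachability in both directions. Reduced to plain commands, using the same interface names and addressing as this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator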
00:33:42.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:33:42.451 00:33:42.451 --- 10.0.0.1 ping statistics --- 00:33:42.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.451 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:42.451 11:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:45.865 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:45.865 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.866 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:47.776 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1771182 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1771182 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1771182 ']' 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
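With the drivers rebound by setup.sh and nvme-tcp loaded on the initiator side, nvmfappstart launches the SPDK target inside the target namespace and blocks until its RPC socket answers. Stripped of the helper plumbing, the launch traced above amounts to the following sketch (the wait loop is a crude stand-in for waitforlisten, which also watches the pid):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!                                          # 1771182 in this run
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # wait for the RPC socket to appear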
00:33:48.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:48.037 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.037 [2024-06-10 11:40:45.098752] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:33:48.037 [2024-06-10 11:40:45.098805] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.037 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.037 [2024-06-10 11:40:45.192793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:48.298 [2024-06-10 11:40:45.287416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.298 [2024-06-10 11:40:45.287477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.298 [2024-06-10 11:40:45.287485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.298 [2024-06-10 11:40:45.287496] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.298 [2024-06-10 11:40:45.287502] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.298 [2024-06-10 11:40:45.287643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.298 [2024-06-10 11:40:45.287790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.298 [2024-06-10 11:40:45.287944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:48.298 [2024-06-10 11:40:45.287944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:33:48.868 11:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:48.868 11:40:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:48.868 ************************************ 00:33:48.868 START TEST spdk_target_abort 00:33:48.868 ************************************ 00:33:48.868 11:40:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:33:48.868 11:40:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:48.868 11:40:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:48.868 11:40:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.868 11:40:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 spdk_targetn1 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 [2024-06-10 11:40:48.878843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 [2024-06-10 11:40:48.916858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:52.165 11:40:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.165 EAL: No free 2048 kB hugepages reported on node 1 
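The spdk_target_abort case wires the local NVMe at 0000:65:00.0 into an NVMe-oF subsystem entirely over RPC before launching the abort workload. In the trace these calls go through rpc_cmd against the namespaced target; issued directly with rpc.py the sequence is equivalent to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target    # exposes bdev spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# One pass of the workload; the harness repeats it for -q 4, 24 and 64.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'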
00:33:55.465 Initializing NVMe Controllers 00:33:55.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:55.465 Initialization complete. Launching workers. 00:33:55.465 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12128, failed: 0 00:33:55.465 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2356, failed to submit 9772 00:33:55.465 success 744, unsuccess 1612, failed 0 00:33:55.465 11:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:55.465 11:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.465 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.778 Initializing NVMe Controllers 00:33:58.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:58.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:58.778 Initialization complete. Launching workers. 00:33:58.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8460, failed: 0 00:33:58.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1211, failed to submit 7249 00:33:58.778 success 331, unsuccess 880, failed 0 00:33:58.778 11:40:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.778 11:40:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.778 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.073 Initializing NVMe Controllers 00:34:02.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:02.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:02.073 Initialization complete. Launching workers. 
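The counters in each result block above are internally consistent: "abort submitted" plus "failed to submit" equals the I/O completed on the namespace, and "success" plus "unsuccess" plus "failed" equals the aborts submitted (success being aborts that took effect, unsuccess presumably aborts whose target command had already completed). A quick check against the qd=4 numbers:

echo $(( 2356 + 9772 ))      # 12128 -> aborts submitted + failed to submit == I/O completed
echo $(( 744 + 1612 + 0 ))   # 2356  -> success + unsuccess + failed == aborts submitted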
00:34:02.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42275, failed: 0 00:34:02.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2668, failed to submit 39607 00:34:02.073 success 602, unsuccess 2066, failed 0 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.073 11:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1771182 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1771182 ']' 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1771182 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:03.984 11:41:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1771182 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1771182' 00:34:03.984 killing process with pid 1771182 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1771182 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1771182 00:34:03.984 00:34:03.984 real 0m15.086s 00:34:03.984 user 1m0.855s 00:34:03.984 sys 0m1.803s 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:03.984 11:41:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:03.984 ************************************ 00:34:03.984 END TEST spdk_target_abort 00:34:03.984 ************************************ 00:34:03.984 11:41:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:03.984 11:41:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:03.984 11:41:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:03.984 11:41:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:04.245 ************************************ 00:34:04.245 START TEST kernel_target_abort 00:34:04.245 
************************************ 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:04.245 11:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:08.450 Waiting for block devices as requested 00:34:08.450 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:08.450 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:08.711 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:08.711 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:34:08.972 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:08.972 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:08.972 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:09.233 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:09.233 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:09.233 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:09.495 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:09.495 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:09.495 No valid GPT data, bailing 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:09.495 11:41:06 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:09.495 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:34:09.757 00:34:09.757 Discovery Log Number of Records 2, Generation counter 2 00:34:09.757 =====Discovery Log Entry 0====== 00:34:09.757 trtype: tcp 00:34:09.757 adrfam: ipv4 00:34:09.757 subtype: current discovery subsystem 00:34:09.757 treq: not specified, sq flow control disable supported 00:34:09.757 portid: 1 00:34:09.757 trsvcid: 4420 00:34:09.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:09.757 traddr: 10.0.0.1 00:34:09.757 eflags: none 00:34:09.757 sectype: none 00:34:09.757 =====Discovery Log Entry 1====== 00:34:09.757 trtype: tcp 00:34:09.757 adrfam: ipv4 00:34:09.757 subtype: nvme subsystem 00:34:09.757 treq: not specified, sq flow control disable supported 00:34:09.757 portid: 1 00:34:09.757 trsvcid: 4420 00:34:09.757 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:09.757 traddr: 10.0.0.1 00:34:09.757 eflags: none 00:34:09.757 sectype: none 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:09.757 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:09.757 11:41:06 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:09.758 11:41:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:09.758 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.060 Initializing NVMe Controllers 00:34:13.060 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:13.060 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:13.060 Initialization complete. Launching workers. 00:34:13.060 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66999, failed: 0 00:34:13.060 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66999, failed to submit 0 00:34:13.060 success 0, unsuccess 66999, failed 0 00:34:13.060 11:41:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:13.060 11:41:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:13.060 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.678 Initializing NVMe Controllers 00:34:15.678 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:15.678 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:15.678 Initialization complete. Launching workers. 
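kernel_target_abort builds the same subsystem with the in-kernel nvmet target instead of SPDK: configure_kernel_target creates the subsystem, namespace and port under configfs, points the namespace at /dev/nvme0n1, exposes it on 10.0.0.1:4420 over TCP, links the subsystem into the port, and verifies the result with nvme discover. The xtrace above hides the redirection targets of the echo commands; filled in from the standard nvmet configfs layout (an assumption, the attribute names are not visible in the log), the sequence is roughly:

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"    # assumed attribute names below
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list the discovery subsystem plus testnqn, as above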
00:34:15.678 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 109818, failed: 0 00:34:15.678 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27658, failed to submit 82160 00:34:15.678 success 0, unsuccess 27658, failed 0 00:34:15.678 11:41:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:15.678 11:41:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:15.939 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.238 Initializing NVMe Controllers 00:34:19.238 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:19.238 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:19.238 Initialization complete. Launching workers. 00:34:19.238 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 106730, failed: 0 00:34:19.238 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26662, failed to submit 80068 00:34:19.238 success 0, unsuccess 26662, failed 0 00:34:19.238 11:41:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:19.238 11:41:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:19.238 11:41:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:34:19.238 11:41:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:19.238 11:41:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:19.238 11:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:19.238 11:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:19.238 11:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:19.238 11:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:19.238 11:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:23.443 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:23.443 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:23.444 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:23.444 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:23.444 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:23.444 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:34:23.444 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:23.444 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:24.825 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:34:24.825 00:34:24.825 real 0m20.793s 00:34:24.825 user 0m9.469s 00:34:24.825 sys 0m6.591s 00:34:24.825 11:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:24.825 11:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.825 ************************************ 00:34:24.825 END TEST kernel_target_abort 00:34:24.825 ************************************ 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:25.095 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:25.096 rmmod nvme_tcp 00:34:25.096 rmmod nvme_fabrics 00:34:25.096 rmmod nvme_keyring 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1771182 ']' 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1771182 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1771182 ']' 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1771182 00:34:25.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1771182) - No such process 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1771182 is not found' 00:34:25.096 Process with pid 1771182 is not found 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:25.096 11:41:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:29.305 Waiting for block devices as requested 00:34:29.305 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:29.305 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:29.566 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:29.566 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:34:29.566 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:29.825 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:29.825 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:29.825 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:30.085 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:30.085 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:30.085 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:34:30.346 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.346 11:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.255 11:41:29 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:32.255 00:34:32.255 real 0m58.360s 00:34:32.255 user 1m15.740s 00:34:32.255 sys 0m20.133s 00:34:32.255 11:41:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:32.255 11:41:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.255 ************************************ 00:34:32.255 END TEST nvmf_abort_qd_sizes 00:34:32.255 ************************************ 00:34:32.516 11:41:29 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:32.516 11:41:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:32.516 11:41:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:32.516 11:41:29 -- common/autotest_common.sh@10 -- # set +x 00:34:32.516 ************************************ 00:34:32.516 START TEST keyring_file 00:34:32.516 ************************************ 00:34:32.516 11:41:29 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:32.516 * Looking for test storage... 
00:34:32.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.516 11:41:29 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.516 11:41:29 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.516 11:41:29 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.516 11:41:29 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.516 11:41:29 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.516 11:41:29 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.516 11:41:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:32.516 11:41:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@47 -- # : 0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0sSZovb6xd 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:32.516 11:41:29 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0sSZovb6xd 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0sSZovb6xd 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0sSZovb6xd 00:34:32.516 11:41:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WPVNBn08Kg 00:34:32.516 11:41:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:32.516 11:41:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:32.777 11:41:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WPVNBn08Kg 00:34:32.777 11:41:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WPVNBn08Kg 00:34:32.777 11:41:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WPVNBn08Kg 00:34:32.777 11:41:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=1781362 00:34:32.777 11:41:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1781362 00:34:32.777 11:41:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1781362 ']' 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:32.777 11:41:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:32.777 [2024-06-10 11:41:29.839077] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
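The keyring_file suite prepares two on-disk TLS PSKs before starting the target: prep_key takes a raw hex key, converts it to the NVMe TLS interchange format with the format_interchange_psk helper from nvmf/common.sh, and stores it in a private temp file that keyring_file_add_key can later point at. A sketch of the key0 preparation traced above (the temp path is whatever mktemp returns, /tmp/tmp.0sSZovb6xd in this run):

source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh   # provides format_interchange_psk
key=00112233445566778899aabbccddeeff
path=$(mktemp)                                   # /tmp/tmp.0sSZovb6xd here
format_interchange_psk "$key" 0 > "$path"        # 0 selects the no-hash digest variant
chmod 0600 "$path"                               # keep the key material private to the test user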
00:34:32.777 [2024-06-10 11:41:29.839152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781362 ] 00:34:32.777 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.777 [2024-06-10 11:41:29.926987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.037 [2024-06-10 11:41:30.023206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:33.608 11:41:30 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:33.608 [2024-06-10 11:41:30.700876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.608 null0 00:34:33.608 [2024-06-10 11:41:30.732925] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:33.608 [2024-06-10 11:41:30.733465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:33.608 [2024-06-10 11:41:30.740954] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.608 11:41:30 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:33.608 [2024-06-10 11:41:30.756984] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:33.608 request: 00:34:33.608 { 00:34:33.608 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.608 "secure_channel": false, 00:34:33.608 "listen_address": { 00:34:33.608 "trtype": "tcp", 00:34:33.608 "traddr": "127.0.0.1", 00:34:33.608 "trsvcid": "4420" 00:34:33.608 }, 00:34:33.608 "method": "nvmf_subsystem_add_listener", 00:34:33.608 "req_id": 1 00:34:33.608 } 00:34:33.608 Got JSON-RPC error response 00:34:33.608 response: 00:34:33.608 { 00:34:33.608 "code": -32602, 00:34:33.608 "message": "Invalid parameters" 00:34:33.608 } 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:33.608 11:41:30 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:33.608 11:41:30 keyring_file -- keyring/file.sh@46 -- # bperfpid=1781644 00:34:33.608 11:41:30 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1781644 /var/tmp/bperf.sock 00:34:33.608 11:41:30 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1781644 ']' 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:33.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:33.608 11:41:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:33.608 [2024-06-10 11:41:30.816497] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 00:34:33.608 [2024-06-10 11:41:30.816558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781644 ] 00:34:33.868 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.868 [2024-06-10 11:41:30.882896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.868 [2024-06-10 11:41:30.953922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.438 11:41:31 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:34.438 11:41:31 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:34.438 11:41:31 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:34.438 11:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:34.698 11:41:31 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WPVNBn08Kg 00:34:34.698 11:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WPVNBn08Kg 00:34:34.959 11:41:32 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:34:34.959 11:41:32 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:34:34.959 11:41:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:34.959 11:41:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:34.959 11:41:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.218 11:41:32 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.0sSZovb6xd == \/\t\m\p\/\t\m\p\.\0\s\S\Z\o\v\b\6\x\d ]] 00:34:35.218 11:41:32 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:34:35.218 11:41:32 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:35.218 11:41:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.218 11:41:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:35.218 11:41:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.478 11:41:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WPVNBn08Kg == \/\t\m\p\/\t\m\p\.\W\P\V\N\B\n\0\8\K\g ]] 00:34:35.478 11:41:32 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.478 11:41:32 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:34:35.478 11:41:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:35.478 11:41:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.738 11:41:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:35.738 11:41:32 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:35.738 11:41:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:35.998 [2024-06-10 11:41:33.014405] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:35.998 nvme0n1 00:34:35.998 11:41:33 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:34:35.998 11:41:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:35.998 11:41:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:35.998 11:41:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.998 11:41:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:35.998 11:41:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.258 11:41:33 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:34:36.258 11:41:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:34:36.258 11:41:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:36.258 11:41:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:36.258 11:41:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:36.258 
11:41:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:36.258 11:41:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.519 11:41:33 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:34:36.519 11:41:33 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.519 Running I/O for 1 seconds... 00:34:37.459 00:34:37.459 Latency(us) 00:34:37.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:37.459 nvme0n1 : 1.01 13586.09 53.07 0.00 0.00 9391.17 3327.21 13913.80 00:34:37.459 =================================================================================================================== 00:34:37.459 Total : 13586.09 53.07 0.00 0.00 9391.17 3327.21 13913.80 00:34:37.459 0 00:34:37.459 11:41:34 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:37.459 11:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:37.719 11:41:34 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:34:37.719 11:41:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:37.719 11:41:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:37.719 11:41:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:37.719 11:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:37.719 11:41:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:37.979 11:41:35 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:34:37.979 11:41:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:34:37.979 11:41:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:37.979 11:41:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:37.979 11:41:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:37.979 11:41:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:37.979 11:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:38.239 11:41:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:38.239 11:41:35 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:38.239 11:41:35 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:38.239 [2024-06-10 11:41:35.398032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:38.239 [2024-06-10 11:41:35.398881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8613b0 (107): Transport endpoint is not connected 00:34:38.239 [2024-06-10 11:41:35.399876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8613b0 (9): Bad file descriptor 00:34:38.239 [2024-06-10 11:41:35.400877] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.239 [2024-06-10 11:41:35.400886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:38.239 [2024-06-10 11:41:35.400892] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.239 request: 00:34:38.239 { 00:34:38.239 "name": "nvme0", 00:34:38.239 "trtype": "tcp", 00:34:38.239 "traddr": "127.0.0.1", 00:34:38.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:38.239 "adrfam": "ipv4", 00:34:38.239 "trsvcid": "4420", 00:34:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:38.239 "psk": "key1", 00:34:38.239 "method": "bdev_nvme_attach_controller", 00:34:38.239 "req_id": 1 00:34:38.239 } 00:34:38.239 Got JSON-RPC error response 00:34:38.239 response: 00:34:38.239 { 00:34:38.239 "code": -5, 00:34:38.239 "message": "Input/output error" 00:34:38.239 } 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:38.239 11:41:35 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:38.239 11:41:35 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:38.239 11:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:38.499 11:41:35 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:34:38.499 11:41:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:34:38.499 11:41:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:38.499 11:41:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:38.499 11:41:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:38.499 11:41:35 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:38.499 11:41:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:38.760 11:41:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:38.760 11:41:35 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:34:38.760 11:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:39.019 11:41:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:34:39.019 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:39.019 11:41:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:34:39.019 11:41:36 keyring_file -- keyring/file.sh@77 -- # jq length 00:34:39.020 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:39.278 11:41:36 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:34:39.278 11:41:36 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.0sSZovb6xd 00:34:39.278 11:41:36 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:39.278 11:41:36 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.278 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.538 [2024-06-10 11:41:36.569096] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0sSZovb6xd': 0100660 00:34:39.538 [2024-06-10 11:41:36.569120] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:39.538 request: 00:34:39.538 { 00:34:39.538 "name": "key0", 00:34:39.538 "path": "/tmp/tmp.0sSZovb6xd", 00:34:39.538 "method": "keyring_file_add_key", 00:34:39.538 "req_id": 1 00:34:39.538 } 00:34:39.538 Got JSON-RPC error response 00:34:39.538 response: 00:34:39.538 { 00:34:39.538 "code": -1, 00:34:39.538 "message": "Operation not permitted" 00:34:39.538 } 00:34:39.538 11:41:36 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:39.538 11:41:36 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:39.538 11:41:36 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:39.538 11:41:36 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:39.538 11:41:36 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.0sSZovb6xd 00:34:39.538 11:41:36 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.538 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0sSZovb6xd 00:34:39.798 11:41:36 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.0sSZovb6xd 00:34:39.798 11:41:36 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:39.798 11:41:36 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:34:39.798 11:41:36 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:39.798 11:41:36 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:39.798 11:41:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:40.057 [2024-06-10 11:41:37.142557] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0sSZovb6xd': No such file or directory 00:34:40.057 [2024-06-10 11:41:37.142575] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:40.057 [2024-06-10 11:41:37.142596] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:40.057 [2024-06-10 11:41:37.142602] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:40.058 [2024-06-10 11:41:37.142608] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:40.058 request: 00:34:40.058 { 00:34:40.058 "name": "nvme0", 00:34:40.058 "trtype": "tcp", 00:34:40.058 "traddr": "127.0.0.1", 00:34:40.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:40.058 "adrfam": "ipv4", 00:34:40.058 "trsvcid": "4420", 00:34:40.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:40.058 "psk": "key0", 00:34:40.058 "method": "bdev_nvme_attach_controller", 
00:34:40.058 "req_id": 1 00:34:40.058 } 00:34:40.058 Got JSON-RPC error response 00:34:40.058 response: 00:34:40.058 { 00:34:40.058 "code": -19, 00:34:40.058 "message": "No such device" 00:34:40.058 } 00:34:40.058 11:41:37 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:40.058 11:41:37 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:40.058 11:41:37 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:40.058 11:41:37 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:40.058 11:41:37 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:34:40.058 11:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:40.321 11:41:37 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.huc5PbAjMm 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:40.321 11:41:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.huc5PbAjMm 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.huc5PbAjMm 00:34:40.321 11:41:37 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.huc5PbAjMm 00:34:40.321 11:41:37 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.huc5PbAjMm 00:34:40.321 11:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.huc5PbAjMm 00:34:40.650 11:41:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:40.650 11:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:40.650 nvme0n1 00:34:40.911 11:41:37 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:34:40.911 11:41:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:40.911 11:41:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:40.911 11:41:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.911 11:41:37 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:40.911 11:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.911 11:41:38 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:34:40.911 11:41:38 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:34:40.911 11:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:41.171 11:41:38 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:34:41.172 11:41:38 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:34:41.172 11:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:41.172 11:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:41.172 11:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:41.432 11:41:38 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:34:41.432 11:41:38 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:41.432 11:41:38 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:34:41.432 11:41:38 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:41.432 11:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:41.693 11:41:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:34:41.693 11:41:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:34:41.693 11:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:41.953 11:41:39 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:34:41.953 11:41:39 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.huc5PbAjMm 00:34:41.953 11:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.huc5PbAjMm 00:34:42.214 11:41:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WPVNBn08Kg 00:34:42.214 11:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WPVNBn08Kg 00:34:42.214 11:41:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:42.214 11:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:42.475 nvme0n1 00:34:42.475 11:41:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:34:42.475 11:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:42.735 11:41:39 keyring_file -- keyring/file.sh@112 -- # config='{ 00:34:42.735 "subsystems": [ 00:34:42.735 { 00:34:42.735 "subsystem": "keyring", 00:34:42.735 "config": [ 00:34:42.735 { 00:34:42.735 "method": "keyring_file_add_key", 00:34:42.735 "params": { 00:34:42.735 "name": "key0", 00:34:42.735 "path": "/tmp/tmp.huc5PbAjMm" 00:34:42.735 } 00:34:42.735 }, 00:34:42.735 { 00:34:42.735 "method": "keyring_file_add_key", 00:34:42.735 "params": { 00:34:42.735 "name": "key1", 00:34:42.735 "path": "/tmp/tmp.WPVNBn08Kg" 00:34:42.735 } 00:34:42.735 } 00:34:42.735 ] 00:34:42.735 }, 00:34:42.735 { 00:34:42.735 "subsystem": "iobuf", 00:34:42.735 "config": [ 00:34:42.735 { 00:34:42.736 "method": "iobuf_set_options", 00:34:42.736 "params": { 00:34:42.736 "small_pool_count": 8192, 00:34:42.736 "large_pool_count": 1024, 00:34:42.736 "small_bufsize": 8192, 00:34:42.736 "large_bufsize": 135168 00:34:42.736 } 00:34:42.736 } 00:34:42.736 ] 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "subsystem": "sock", 00:34:42.736 "config": [ 00:34:42.736 { 00:34:42.736 "method": "sock_set_default_impl", 00:34:42.736 "params": { 00:34:42.736 "impl_name": "posix" 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "sock_impl_set_options", 00:34:42.736 "params": { 00:34:42.736 "impl_name": "ssl", 00:34:42.736 "recv_buf_size": 4096, 00:34:42.736 "send_buf_size": 4096, 00:34:42.736 "enable_recv_pipe": true, 00:34:42.736 "enable_quickack": false, 00:34:42.736 "enable_placement_id": 0, 00:34:42.736 "enable_zerocopy_send_server": true, 00:34:42.736 "enable_zerocopy_send_client": false, 00:34:42.736 "zerocopy_threshold": 0, 00:34:42.736 "tls_version": 0, 00:34:42.736 "enable_ktls": false 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "sock_impl_set_options", 00:34:42.736 "params": { 00:34:42.736 "impl_name": "posix", 00:34:42.736 "recv_buf_size": 2097152, 00:34:42.736 "send_buf_size": 2097152, 00:34:42.736 "enable_recv_pipe": true, 00:34:42.736 "enable_quickack": false, 00:34:42.736 "enable_placement_id": 0, 00:34:42.736 "enable_zerocopy_send_server": true, 00:34:42.736 "enable_zerocopy_send_client": false, 00:34:42.736 "zerocopy_threshold": 0, 00:34:42.736 "tls_version": 0, 00:34:42.736 "enable_ktls": false 00:34:42.736 } 00:34:42.736 } 00:34:42.736 ] 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "subsystem": "vmd", 00:34:42.736 "config": [] 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "subsystem": "accel", 00:34:42.736 "config": [ 00:34:42.736 { 00:34:42.736 "method": "accel_set_options", 00:34:42.736 "params": { 00:34:42.736 "small_cache_size": 128, 00:34:42.736 "large_cache_size": 16, 00:34:42.736 "task_count": 2048, 00:34:42.736 "sequence_count": 2048, 00:34:42.736 "buf_count": 2048 00:34:42.736 } 00:34:42.736 } 00:34:42.736 ] 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "subsystem": "bdev", 00:34:42.736 "config": [ 00:34:42.736 { 00:34:42.736 "method": "bdev_set_options", 00:34:42.736 "params": { 00:34:42.736 "bdev_io_pool_size": 65535, 00:34:42.736 "bdev_io_cache_size": 256, 00:34:42.736 "bdev_auto_examine": true, 00:34:42.736 "iobuf_small_cache_size": 128, 
00:34:42.736 "iobuf_large_cache_size": 16 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_raid_set_options", 00:34:42.736 "params": { 00:34:42.736 "process_window_size_kb": 1024 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_iscsi_set_options", 00:34:42.736 "params": { 00:34:42.736 "timeout_sec": 30 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_nvme_set_options", 00:34:42.736 "params": { 00:34:42.736 "action_on_timeout": "none", 00:34:42.736 "timeout_us": 0, 00:34:42.736 "timeout_admin_us": 0, 00:34:42.736 "keep_alive_timeout_ms": 10000, 00:34:42.736 "arbitration_burst": 0, 00:34:42.736 "low_priority_weight": 0, 00:34:42.736 "medium_priority_weight": 0, 00:34:42.736 "high_priority_weight": 0, 00:34:42.736 "nvme_adminq_poll_period_us": 10000, 00:34:42.736 "nvme_ioq_poll_period_us": 0, 00:34:42.736 "io_queue_requests": 512, 00:34:42.736 "delay_cmd_submit": true, 00:34:42.736 "transport_retry_count": 4, 00:34:42.736 "bdev_retry_count": 3, 00:34:42.736 "transport_ack_timeout": 0, 00:34:42.736 "ctrlr_loss_timeout_sec": 0, 00:34:42.736 "reconnect_delay_sec": 0, 00:34:42.736 "fast_io_fail_timeout_sec": 0, 00:34:42.736 "disable_auto_failback": false, 00:34:42.736 "generate_uuids": false, 00:34:42.736 "transport_tos": 0, 00:34:42.736 "nvme_error_stat": false, 00:34:42.736 "rdma_srq_size": 0, 00:34:42.736 "io_path_stat": false, 00:34:42.736 "allow_accel_sequence": false, 00:34:42.736 "rdma_max_cq_size": 0, 00:34:42.736 "rdma_cm_event_timeout_ms": 0, 00:34:42.736 "dhchap_digests": [ 00:34:42.736 "sha256", 00:34:42.736 "sha384", 00:34:42.736 "sha512" 00:34:42.736 ], 00:34:42.736 "dhchap_dhgroups": [ 00:34:42.736 "null", 00:34:42.736 "ffdhe2048", 00:34:42.736 "ffdhe3072", 00:34:42.736 "ffdhe4096", 00:34:42.736 "ffdhe6144", 00:34:42.736 "ffdhe8192" 00:34:42.736 ] 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_nvme_attach_controller", 00:34:42.736 "params": { 00:34:42.736 "name": "nvme0", 00:34:42.736 "trtype": "TCP", 00:34:42.736 "adrfam": "IPv4", 00:34:42.736 "traddr": "127.0.0.1", 00:34:42.736 "trsvcid": "4420", 00:34:42.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.736 "prchk_reftag": false, 00:34:42.736 "prchk_guard": false, 00:34:42.736 "ctrlr_loss_timeout_sec": 0, 00:34:42.736 "reconnect_delay_sec": 0, 00:34:42.736 "fast_io_fail_timeout_sec": 0, 00:34:42.736 "psk": "key0", 00:34:42.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.736 "hdgst": false, 00:34:42.736 "ddgst": false 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_nvme_set_hotplug", 00:34:42.736 "params": { 00:34:42.736 "period_us": 100000, 00:34:42.736 "enable": false 00:34:42.736 } 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "method": "bdev_wait_for_examine" 00:34:42.736 } 00:34:42.736 ] 00:34:42.736 }, 00:34:42.736 { 00:34:42.736 "subsystem": "nbd", 00:34:42.736 "config": [] 00:34:42.736 } 00:34:42.736 ] 00:34:42.736 }' 00:34:42.736 11:41:39 keyring_file -- keyring/file.sh@114 -- # killprocess 1781644 00:34:42.736 11:41:39 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1781644 ']' 00:34:42.736 11:41:39 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1781644 00:34:42.736 11:41:39 keyring_file -- common/autotest_common.sh@954 -- # uname 00:34:42.736 11:41:39 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:42.736 11:41:39 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1781644 00:34:42.997 11:41:40 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:42.997 11:41:40 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1781644' 00:34:42.998 killing process with pid 1781644 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@968 -- # kill 1781644 00:34:42.998 Received shutdown signal, test time was about 1.000000 seconds 00:34:42.998 00:34:42.998 Latency(us) 00:34:42.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.998 =================================================================================================================== 00:34:42.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@973 -- # wait 1781644 00:34:42.998 11:41:40 keyring_file -- keyring/file.sh@117 -- # bperfpid=1783303 00:34:42.998 11:41:40 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1783303 /var/tmp/bperf.sock 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1783303 ']' 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:42.998 11:41:40 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
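The second bdevperf start seen here (file.sh@115-117) replays the configuration captured from the first instance instead of repeating the live RPCs: save_config is taken over the bperf socket, the old process is killed, and a fresh bdevperf reads the JSON at boot through a process substitution, which is why its command line shows -c /dev/fd/63. Roughly, with old_bperfpid standing in for the first instance's pid (1781644 in this run):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Capture the keyring + bdev subsystems from the running instance before it goes away
  config=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock save_config)
  kill "$old_bperfpid"
  wait "$old_bperfpid" || true                          # reap it; the exit code may reflect the signal
  # Hand the captured JSON back in at startup; <(...) is what shows up as /dev/fd/63 above
  "$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  bperfpid=$!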
00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:42.998 11:41:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:42.998 11:41:40 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:34:42.998 "subsystems": [ 00:34:42.998 { 00:34:42.998 "subsystem": "keyring", 00:34:42.998 "config": [ 00:34:42.998 { 00:34:42.998 "method": "keyring_file_add_key", 00:34:42.998 "params": { 00:34:42.998 "name": "key0", 00:34:42.998 "path": "/tmp/tmp.huc5PbAjMm" 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "method": "keyring_file_add_key", 00:34:42.998 "params": { 00:34:42.998 "name": "key1", 00:34:42.998 "path": "/tmp/tmp.WPVNBn08Kg" 00:34:42.998 } 00:34:42.998 } 00:34:42.998 ] 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "subsystem": "iobuf", 00:34:42.998 "config": [ 00:34:42.998 { 00:34:42.998 "method": "iobuf_set_options", 00:34:42.998 "params": { 00:34:42.998 "small_pool_count": 8192, 00:34:42.998 "large_pool_count": 1024, 00:34:42.998 "small_bufsize": 8192, 00:34:42.998 "large_bufsize": 135168 00:34:42.998 } 00:34:42.998 } 00:34:42.998 ] 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "subsystem": "sock", 00:34:42.998 "config": [ 00:34:42.998 { 00:34:42.998 "method": "sock_set_default_impl", 00:34:42.998 "params": { 00:34:42.998 "impl_name": "posix" 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "method": "sock_impl_set_options", 00:34:42.998 "params": { 00:34:42.998 "impl_name": "ssl", 00:34:42.998 "recv_buf_size": 4096, 00:34:42.998 "send_buf_size": 4096, 00:34:42.998 "enable_recv_pipe": true, 00:34:42.998 "enable_quickack": false, 00:34:42.998 "enable_placement_id": 0, 00:34:42.998 "enable_zerocopy_send_server": true, 00:34:42.998 "enable_zerocopy_send_client": false, 00:34:42.998 "zerocopy_threshold": 0, 00:34:42.998 "tls_version": 0, 00:34:42.998 "enable_ktls": false 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "method": "sock_impl_set_options", 00:34:42.998 "params": { 00:34:42.998 "impl_name": "posix", 00:34:42.998 "recv_buf_size": 2097152, 00:34:42.998 "send_buf_size": 2097152, 00:34:42.998 "enable_recv_pipe": true, 00:34:42.998 "enable_quickack": false, 00:34:42.998 "enable_placement_id": 0, 00:34:42.998 "enable_zerocopy_send_server": true, 00:34:42.998 "enable_zerocopy_send_client": false, 00:34:42.998 "zerocopy_threshold": 0, 00:34:42.998 "tls_version": 0, 00:34:42.998 "enable_ktls": false 00:34:42.998 } 00:34:42.998 } 00:34:42.998 ] 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "subsystem": "vmd", 00:34:42.998 "config": [] 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "subsystem": "accel", 00:34:42.998 "config": [ 00:34:42.998 { 00:34:42.998 "method": "accel_set_options", 00:34:42.998 "params": { 00:34:42.998 "small_cache_size": 128, 00:34:42.998 "large_cache_size": 16, 00:34:42.998 "task_count": 2048, 00:34:42.998 "sequence_count": 2048, 00:34:42.998 "buf_count": 2048 00:34:42.998 } 00:34:42.998 } 00:34:42.998 ] 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "subsystem": "bdev", 00:34:42.998 "config": [ 00:34:42.998 { 00:34:42.998 "method": "bdev_set_options", 00:34:42.998 "params": { 00:34:42.998 "bdev_io_pool_size": 65535, 00:34:42.998 "bdev_io_cache_size": 256, 00:34:42.998 "bdev_auto_examine": true, 00:34:42.998 "iobuf_small_cache_size": 128, 00:34:42.998 "iobuf_large_cache_size": 16 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "method": "bdev_raid_set_options", 00:34:42.998 "params": { 00:34:42.998 "process_window_size_kb": 1024 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 
"method": "bdev_iscsi_set_options", 00:34:42.998 "params": { 00:34:42.998 "timeout_sec": 30 00:34:42.998 } 00:34:42.998 }, 00:34:42.998 { 00:34:42.998 "method": "bdev_nvme_set_options", 00:34:42.998 "params": { 00:34:42.999 "action_on_timeout": "none", 00:34:42.999 "timeout_us": 0, 00:34:42.999 "timeout_admin_us": 0, 00:34:42.999 "keep_alive_timeout_ms": 10000, 00:34:42.999 "arbitration_burst": 0, 00:34:42.999 "low_priority_weight": 0, 00:34:42.999 "medium_priority_weight": 0, 00:34:42.999 "high_priority_weight": 0, 00:34:42.999 "nvme_adminq_poll_period_us": 10000, 00:34:42.999 "nvme_ioq_poll_period_us": 0, 00:34:42.999 "io_queue_requests": 512, 00:34:42.999 "delay_cmd_submit": true, 00:34:42.999 "transport_retry_count": 4, 00:34:42.999 "bdev_retry_count": 3, 00:34:42.999 "transport_ack_timeout": 0, 00:34:42.999 "ctrlr_loss_timeout_sec": 0, 00:34:42.999 "reconnect_delay_sec": 0, 00:34:42.999 "fast_io_fail_timeout_sec": 0, 00:34:42.999 "disable_auto_failback": false, 00:34:42.999 "generate_uuids": false, 00:34:42.999 "transport_tos": 0, 00:34:42.999 "nvme_error_stat": false, 00:34:42.999 "rdma_srq_size": 0, 00:34:42.999 "io_path_stat": false, 00:34:42.999 "allow_accel_sequence": false, 00:34:42.999 "rdma_max_cq_size": 0, 00:34:42.999 "rdma_cm_event_timeout_ms": 0, 00:34:42.999 "dhchap_digests": [ 00:34:42.999 "sha256", 00:34:42.999 "sha384", 00:34:42.999 "sha512" 00:34:42.999 ], 00:34:42.999 "dhchap_dhgroups": [ 00:34:42.999 "null", 00:34:42.999 "ffdhe2048", 00:34:42.999 "ffdhe3072", 00:34:42.999 "ffdhe4096", 00:34:42.999 "ffdhe6144", 00:34:42.999 "ffdhe8192" 00:34:42.999 ] 00:34:42.999 } 00:34:42.999 }, 00:34:42.999 { 00:34:42.999 "method": "bdev_nvme_attach_controller", 00:34:42.999 "params": { 00:34:42.999 "name": "nvme0", 00:34:42.999 "trtype": "TCP", 00:34:42.999 "adrfam": "IPv4", 00:34:42.999 "traddr": "127.0.0.1", 00:34:42.999 "trsvcid": "4420", 00:34:42.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.999 "prchk_reftag": false, 00:34:42.999 "prchk_guard": false, 00:34:42.999 "ctrlr_loss_timeout_sec": 0, 00:34:42.999 "reconnect_delay_sec": 0, 00:34:42.999 "fast_io_fail_timeout_sec": 0, 00:34:42.999 "psk": "key0", 00:34:42.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.999 "hdgst": false, 00:34:42.999 "ddgst": false 00:34:42.999 } 00:34:42.999 }, 00:34:42.999 { 00:34:42.999 "method": "bdev_nvme_set_hotplug", 00:34:42.999 "params": { 00:34:42.999 "period_us": 100000, 00:34:42.999 "enable": false 00:34:42.999 } 00:34:42.999 }, 00:34:42.999 { 00:34:42.999 "method": "bdev_wait_for_examine" 00:34:42.999 } 00:34:42.999 ] 00:34:42.999 }, 00:34:42.999 { 00:34:42.999 "subsystem": "nbd", 00:34:42.999 "config": [] 00:34:42.999 } 00:34:42.999 ] 00:34:42.999 }' 00:34:42.999 [2024-06-10 11:41:40.189097] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
00:34:42.999 [2024-06-10 11:41:40.189151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783303 ] 00:34:42.999 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.261 [2024-06-10 11:41:40.250702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.261 [2024-06-10 11:41:40.311465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.261 [2024-06-10 11:41:40.455849] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:43.832 11:41:41 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:43.832 11:41:41 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:43.832 11:41:41 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:34:43.832 11:41:41 keyring_file -- keyring/file.sh@120 -- # jq length 00:34:43.832 11:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:44.092 11:41:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:34:44.092 11:41:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:34:44.092 11:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:44.092 11:41:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:44.092 11:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:44.092 11:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:44.092 11:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:44.353 11:41:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:44.353 11:41:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:34:44.353 11:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:44.353 11:41:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:44.353 11:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:44.353 11:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:44.353 11:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:44.353 11:41:41 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:34:44.613 11:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.huc5PbAjMm /tmp/tmp.WPVNBn08Kg 00:34:44.613 11:41:41 keyring_file -- keyring/file.sh@20 -- # killprocess 1783303 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1783303 ']' 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1783303 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1783303 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1783303' 00:34:44.613 killing process with pid 1783303 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@968 -- # kill 1783303 00:34:44.613 Received shutdown signal, test time was about 1.000000 seconds 00:34:44.613 00:34:44.613 Latency(us) 00:34:44.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.613 =================================================================================================================== 00:34:44.613 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:44.613 11:41:41 keyring_file -- common/autotest_common.sh@973 -- # wait 1783303 00:34:44.896 11:41:41 keyring_file -- keyring/file.sh@21 -- # killprocess 1781362 00:34:44.896 11:41:41 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1781362 ']' 00:34:44.896 11:41:41 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1781362 00:34:44.896 11:41:41 keyring_file -- common/autotest_common.sh@954 -- # uname 00:34:44.896 11:41:41 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:44.896 11:41:41 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1781362 00:34:44.896 11:41:42 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:44.896 11:41:42 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:44.896 11:41:42 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1781362' 00:34:44.896 killing process with pid 1781362 00:34:44.896 11:41:42 keyring_file -- common/autotest_common.sh@968 -- # kill 1781362 00:34:44.896 [2024-06-10 11:41:42.008277] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:44.896 11:41:42 keyring_file -- common/autotest_common.sh@973 -- # wait 1781362 00:34:45.157 00:34:45.157 real 0m12.683s 00:34:45.157 user 0m30.734s 00:34:45.157 sys 0m2.839s 00:34:45.158 11:41:42 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:45.158 11:41:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:45.158 ************************************ 00:34:45.158 END TEST keyring_file 00:34:45.158 ************************************ 00:34:45.158 11:41:42 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:34:45.158 11:41:42 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:45.158 11:41:42 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:45.158 11:41:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:45.158 11:41:42 -- common/autotest_common.sh@10 -- # set +x 00:34:45.158 ************************************ 00:34:45.158 START TEST keyring_linux 00:34:45.158 ************************************ 00:34:45.158 11:41:42 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:45.158 * Looking for test storage... 
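Before keyring_linux repeats the exercise below against the kernel keyring, the initiator-side round trip that keyring_file just finished is worth summarizing. The individual commands all appear in the trace above; the rpc() wrapper here is just shorthand for rpc.py -s /var/tmp/bperf.sock, and the target is assumed to already be listening on 127.0.0.1:4420 with a matching PSK for nqn.2016-06.io.spdk:host0, as set up at the start of the suite.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  rpc keyring_file_add_key key0 /tmp/tmp.huc5PbAjMm                  # register the file-backed PSK by name
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests   # 1 s randrw run over TLS
  rpc bdev_nvme_detach_controller nvme0
  rpc keyring_file_remove_key key0                                   # release the key once the controller is gone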
00:34:45.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:45.418 11:41:42 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:45.418 11:41:42 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.418 11:41:42 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.418 11:41:42 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.418 11:41:42 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.418 11:41:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.418 11:41:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.418 11:41:42 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.418 11:41:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:45.418 11:41:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.418 11:41:42 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.418 11:41:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:45.418 11:41:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:45.418 11:41:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:45.418 11:41:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:45.419 11:41:42 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:45.419 /tmp/:spdk-test:key0 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:45.419 11:41:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:45.419 11:41:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:45.419 /tmp/:spdk-test:key1 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1783707 00:34:45.419 11:41:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1783707 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1783707 ']' 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:45.419 11:41:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:45.419 [2024-06-10 11:41:42.568381] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
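The prep_key trace above reduces to a short shell flow; a hedged sketch (format_interchange_psk is the test/nvmf/common.sh helper invoked above, which wraps the raw key as NVMeTLSkey-1:00:<base64>:):
  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  format_interchange_psk "$key" 0 > "$path"   # emit the interchange-format PSK into the key file
  chmod 0600 "$path"                          # restrict the key file before SPDK is pointed at it
  echo "$path"                                # prep_key's "return value": the path callers reuse later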
00:34:45.419 [2024-06-10 11:41:42.568458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783707 ] 00:34:45.419 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.679 [2024-06-10 11:41:42.652896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.679 [2024-06-10 11:41:42.721319] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.250 11:41:43 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:46.250 11:41:43 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:34:46.250 11:41:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:46.250 11:41:43 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.250 11:41:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:46.250 [2024-06-10 11:41:43.426929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.250 null0 00:34:46.250 [2024-06-10 11:41:43.458979] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:46.250 [2024-06-10 11:41:43.459449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.510 11:41:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:46.510 946743231 00:34:46.510 11:41:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:46.510 923739072 00:34:46.510 11:41:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1783935 00:34:46.510 11:41:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1783935 /var/tmp/bperf.sock 00:34:46.510 11:41:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1783935 ']' 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:46.510 11:41:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:46.510 [2024-06-10 11:41:43.542819] Starting SPDK v24.09-pre git sha1 3b7525570 / DPDK 24.03.0 initialization... 
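The key loading and bdevperf launch traced above, condensed into a hedged shell sketch (paths are relative to the spdk checkout; the serials 946743231 and 923739072 are simply what this run got back from keyctl):
  # Load both test PSKs into the kernel session keyring (@s); keyctl prints the key serial.
  keyctl add user ":spdk-test:key0" "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user ":spdk-test:key1" "$(cat /tmp/:spdk-test:key1)" @s
  # Start bdevperf paused (--wait-for-rpc) on its own RPC socket so keyring options
  # can be configured before the framework initializes.
  ./build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z --wait-for-rpc &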
00:34:46.510 [2024-06-10 11:41:43.542900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783935 ] 00:34:46.510 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.510 [2024-06-10 11:41:43.606055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.510 [2024-06-10 11:41:43.667151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.452 11:41:44 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:47.452 11:41:44 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:34:47.452 11:41:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:47.452 11:41:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:47.452 11:41:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:47.452 11:41:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:47.712 11:41:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:47.712 11:41:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:47.972 [2024-06-10 11:41:45.020279] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:47.972 nvme0n1 00:34:47.972 11:41:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:47.972 11:41:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:47.972 11:41:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:47.972 11:41:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:47.972 11:41:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:47.972 11:41:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:48.232 11:41:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:48.232 11:41:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:48.232 11:41:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:48.232 11:41:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:48.232 11:41:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:48.232 11:41:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:48.233 11:41:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@25 -- # sn=946743231 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
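The RPC sequence above, reduced to the commands actually driven over the bperf socket (a sketch; bperf_cmd mirrors the keyring/common.sh wrapper seen in the trace):
  bperf_cmd() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_cmd keyring_linux_set_options --enable    # resolve ":spdk-test:*" names via the kernel keyring
  bperf_cmd framework_start_init                  # bdevperf was started with --wait-for-rpc
  bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  bperf_cmd keyring_get_keys | jq length          # check_keys expects exactly 1 key at this point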
00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 946743231 == \9\4\6\7\4\3\2\3\1 ]] 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 946743231 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:48.492 11:41:45 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:48.492 Running I/O for 1 seconds... 00:34:49.530 00:34:49.530 Latency(us) 00:34:49.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.530 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:49.530 nvme0n1 : 1.01 15887.15 62.06 0.00 0.00 8021.47 6805.66 17140.18 00:34:49.530 =================================================================================================================== 00:34:49.530 Total : 15887.15 62.06 0.00 0.00 8021.47 6805.66 17140.18 00:34:49.530 0 00:34:49.530 11:41:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:49.530 11:41:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:49.791 11:41:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:49.791 11:41:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:49.791 11:41:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:49.791 11:41:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:49.791 11:41:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:49.791 11:41:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:50.052 11:41:47 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:50.052 [2024-06-10 11:41:47.221080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:50.052 [2024-06-10 11:41:47.221779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe62d0 (107): Transport endpoint is not connected 00:34:50.052 [2024-06-10 11:41:47.222773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe62d0 (9): Bad file descriptor 00:34:50.052 [2024-06-10 11:41:47.223775] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:50.052 [2024-06-10 11:41:47.223784] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:50.052 [2024-06-10 11:41:47.223790] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:50.052 request: 00:34:50.052 { 00:34:50.052 "name": "nvme0", 00:34:50.052 "trtype": "tcp", 00:34:50.052 "traddr": "127.0.0.1", 00:34:50.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.052 "adrfam": "ipv4", 00:34:50.052 "trsvcid": "4420", 00:34:50.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.052 "psk": ":spdk-test:key1", 00:34:50.052 "method": "bdev_nvme_attach_controller", 00:34:50.052 "req_id": 1 00:34:50.052 } 00:34:50.052 Got JSON-RPC error response 00:34:50.052 response: 00:34:50.052 { 00:34:50.052 "code": -5, 00:34:50.052 "message": "Input/output error" 00:34:50.052 } 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@33 -- # sn=946743231 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 946743231 00:34:50.052 1 links removed 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@33 -- # sn=923739072 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 923739072 00:34:50.052 1 links removed 00:34:50.052 11:41:47 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 1783935 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1783935 ']' 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1783935 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:50.052 11:41:47 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1783935 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1783935' 00:34:50.314 killing process with pid 1783935 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@968 -- # kill 1783935 00:34:50.314 Received shutdown signal, test time was about 1.000000 seconds 00:34:50.314 00:34:50.314 Latency(us) 00:34:50.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.314 =================================================================================================================== 00:34:50.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@973 -- # wait 1783935 00:34:50.314 11:41:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1783707 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1783707 ']' 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1783707 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1783707 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1783707' 00:34:50.314 killing process with pid 1783707 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@968 -- # kill 1783707 00:34:50.314 11:41:47 keyring_linux -- common/autotest_common.sh@973 -- # wait 1783707 00:34:50.576 00:34:50.576 real 0m5.399s 00:34:50.576 user 0m9.955s 00:34:50.576 sys 0m1.463s 00:34:50.576 11:41:47 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:50.576 11:41:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:50.576 ************************************ 00:34:50.576 END TEST keyring_linux 00:34:50.576 ************************************ 00:34:50.576 11:41:47 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
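For reference, the cleanup that ran just before this teardown (the "1 links removed" lines above) boils down to looking up each test key's serial in the session keyring and unlinking it; a hedged sketch:
  for name in ":spdk-test:key0" ":spdk-test:key1"; do
    sn=$(keyctl search @s user "$name")   # 946743231 and 923739072 in this run
    keyctl unlink "$sn"                   # the log reports "1 links removed" per key
  done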
00:34:50.576 11:41:47 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:50.576 11:41:47 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:34:50.576 11:41:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:50.576 11:41:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:50.576 11:41:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:50.576 11:41:47 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:34:50.576 11:41:47 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:34:50.576 11:41:47 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:50.576 11:41:47 -- common/autotest_common.sh@10 -- # set +x 00:34:50.576 11:41:47 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:34:50.576 11:41:47 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:34:50.576 11:41:47 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:34:50.576 11:41:47 -- common/autotest_common.sh@10 -- # set +x 00:34:58.723 INFO: APP EXITING 00:34:58.723 INFO: killing all VMs 00:34:58.723 INFO: killing vhost app 00:34:58.723 WARN: no vhost pid file found 00:34:58.723 INFO: EXIT DONE 00:35:01.273 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:35:01.273 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:35:01.534 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:01.534 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:05.738 Cleaning 00:35:05.738 Removing: /var/run/dpdk/spdk0/config 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:05.738 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:05.738 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:05.738 Removing: /var/run/dpdk/spdk1/config 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:05.738 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:05.738 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:05.738 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:05.738 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:05.738 Removing: /var/run/dpdk/spdk2/config 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:05.738 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:05.738 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:05.738 Removing: /var/run/dpdk/spdk3/config 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:05.738 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:05.738 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:05.738 Removing: /var/run/dpdk/spdk4/config 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:05.738 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:05.738 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:05.738 Removing: /dev/shm/bdev_svc_trace.1 00:35:05.738 Removing: /dev/shm/nvmf_trace.0 00:35:05.738 Removing: /dev/shm/spdk_tgt_trace.pid1338175 00:35:05.738 Removing: /var/run/dpdk/spdk0 00:35:05.738 Removing: /var/run/dpdk/spdk1 00:35:05.738 Removing: /var/run/dpdk/spdk2 00:35:05.738 Removing: /var/run/dpdk/spdk3 00:35:05.738 Removing: /var/run/dpdk/spdk4 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1334651 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1335922 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1338175 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1338664 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1339610 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1339919 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1340884 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1340928 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1341309 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1342969 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1344304 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1344659 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1345012 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1345387 00:35:05.738 Removing: 
/var/run/dpdk/spdk_pid1345648 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1345802 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1346108 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1346450 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1347383 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1350342 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1350655 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1350950 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1351081 00:35:05.738 Removing: /var/run/dpdk/spdk_pid1351439 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1351727 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1352073 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1352203 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1352414 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1352718 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1352777 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1353060 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1353469 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1353786 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1353965 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1354205 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1354274 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1354575 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1354728 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1354934 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1355250 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1355541 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1355629 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1355928 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1356245 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1356560 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1356667 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1356920 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1357237 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1357556 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1357650 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1357912 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1358233 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1358550 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1358614 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1358917 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1359232 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1359518 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1359626 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1360007 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1364633 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1417280 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1422544 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1434154 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1440518 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1445382 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1446119 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1459760 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1459846 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1460637 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1461411 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1462309 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1462914 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1462917 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1463225 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1463248 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1463370 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1464171 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1465066 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1466091 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1466702 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1466712 00:35:05.999 Removing: 
/var/run/dpdk/spdk_pid1467007 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1468764 00:35:05.999 Removing: /var/run/dpdk/spdk_pid1469776 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1479120 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1479446 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1484863 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1491566 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1494361 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1506547 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1517957 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1519776 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1520700 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1540827 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1545648 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1574736 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1580342 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1581884 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1583691 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1583867 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1584003 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1584021 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1584661 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1586393 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1587171 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1587522 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1589743 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1590330 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1590978 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1596132 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1608714 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1612943 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1619850 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1621224 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1622621 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1627820 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1632956 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1642310 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1642319 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1647802 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1648084 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1648381 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1648696 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1648719 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1655008 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1655529 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1661018 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1663586 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1670124 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1676955 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1686774 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1695151 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1695156 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1717545 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1718258 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1718786 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1719382 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1720171 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1720677 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1721318 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1721915 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1727055 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1727361 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1734310 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1734430 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1736705 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1745910 00:35:06.260 Removing: /var/run/dpdk/spdk_pid1745916 00:35:06.521 Removing: 
/var/run/dpdk/spdk_pid1752839 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1754769 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1756850 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1758112 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1760109 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1761294 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1771813 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1772399 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1773009 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1775857 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1776224 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1776810 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1781362 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1781644 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1783303 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1783707 00:35:06.521 Removing: /var/run/dpdk/spdk_pid1783935 00:35:06.521 Clean 00:35:06.521 11:42:03 -- common/autotest_common.sh@1450 -- # return 0 00:35:06.521 11:42:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:35:06.521 11:42:03 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:06.521 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:35:06.521 11:42:03 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:35:06.521 11:42:03 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:06.521 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:35:06.521 11:42:03 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:06.521 11:42:03 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:06.521 11:42:03 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:06.521 11:42:03 -- spdk/autotest.sh@391 -- # hash lcov 00:35:06.521 11:42:03 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:06.521 11:42:03 -- spdk/autotest.sh@393 -- # hostname 00:35:06.521 11:42:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-CYP-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:06.782 geninfo: WARNING: invalid characters removed from testname! 
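The coverage pass that starts above and continues below reduces to a handful of lcov invocations. A condensed, hedged sketch follows; rcflags stands in for the longer --rc switch list in the log and rootdir for the spdk checkout path:
  rcflags="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"   # more --rc switches appear in the log; elided here
  lcov $rcflags --no-external -q -c -d "$rootdir" -t "$(hostname)" -o cov_test.info   # capture counters from this run
  lcov $rcflags -q -a cov_base.info -a cov_test.info -o cov_total.info                # merge with the pre-test baseline
  for drop in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rcflags -q -r cov_total.info "$drop" -o cov_total.info                      # strip non-SPDK and helper trees
  done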
00:35:33.354 11:42:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:33.354 11:42:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:35.894 11:42:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:37.802 11:42:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:39.714 11:42:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:42.251 11:42:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:44.170 11:42:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:44.170 11:42:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.170 11:42:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:44.170 11:42:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.170 11:42:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.170 11:42:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.170 11:42:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.170 11:42:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.170 11:42:41 -- paths/export.sh@5 -- $ export PATH 00:35:44.170 11:42:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.170 11:42:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:44.170 11:42:41 -- common/autobuild_common.sh@437 -- $ date +%s 00:35:44.170 11:42:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718012561.XXXXXX 00:35:44.170 11:42:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718012561.6Ed5aG 00:35:44.170 11:42:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:35:44.170 11:42:41 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:35:44.170 11:42:41 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:44.170 11:42:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:44.170 11:42:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:44.170 11:42:41 -- common/autobuild_common.sh@453 -- $ get_config_params 00:35:44.170 11:42:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:35:44.170 11:42:41 -- common/autotest_common.sh@10 -- $ set +x 00:35:44.170 11:42:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:44.170 11:42:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:35:44.170 11:42:41 -- pm/common@17 -- $ local monitor 00:35:44.170 11:42:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:44.170 11:42:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:44.170 11:42:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:44.170 11:42:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:44.170 11:42:41 -- pm/common@21 -- $ date +%s 00:35:44.170 11:42:41 -- pm/common@21 -- $ date +%s 00:35:44.170 
11:42:41 -- pm/common@25 -- $ sleep 1 00:35:44.170 11:42:41 -- pm/common@21 -- $ date +%s 00:35:44.170 11:42:41 -- pm/common@21 -- $ date +%s 00:35:44.170 11:42:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012561 00:35:44.170 11:42:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012561 00:35:44.170 11:42:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012561 00:35:44.170 11:42:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012561 00:35:44.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012561_collect-vmstat.pm.log 00:35:44.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012561_collect-cpu-load.pm.log 00:35:44.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012561_collect-cpu-temp.pm.log 00:35:44.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012561_collect-bmc-pm.bmc.pm.log 00:35:45.123 11:42:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:35:45.123 11:42:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:35:45.123 11:42:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:45.123 11:42:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:45.123 11:42:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:45.123 11:42:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:45.123 11:42:42 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:45.123 11:42:42 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:45.123 11:42:42 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:45.123 11:42:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:45.123 11:42:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:45.123 11:42:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:45.123 11:42:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:45.123 11:42:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:45.124 11:42:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:45.124 11:42:42 -- pm/common@44 -- $ pid=1796543 00:35:45.124 11:42:42 -- pm/common@50 -- $ kill -TERM 1796543 00:35:45.124 11:42:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:45.124 11:42:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:45.124 11:42:42 -- pm/common@44 -- $ pid=1796544 00:35:45.124 11:42:42 -- pm/common@50 -- $ kill 
-TERM 1796544 00:35:45.124 11:42:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:45.124 11:42:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:45.124 11:42:42 -- pm/common@44 -- $ pid=1796546 00:35:45.124 11:42:42 -- pm/common@50 -- $ kill -TERM 1796546 00:35:45.124 11:42:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:45.124 11:42:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:45.124 11:42:42 -- pm/common@44 -- $ pid=1796570 00:35:45.124 11:42:42 -- pm/common@50 -- $ sudo -E kill -TERM 1796570 00:35:45.124 + [[ -n 1217464 ]] 00:35:45.124 + sudo kill 1217464 00:35:45.163 [Pipeline] } 00:35:45.174 [Pipeline] // stage 00:35:45.176 [Pipeline] } 00:35:45.185 [Pipeline] // timeout 00:35:45.188 [Pipeline] } 00:35:45.199 [Pipeline] // catchError 00:35:45.202 [Pipeline] } 00:35:45.212 [Pipeline] // wrap 00:35:45.216 [Pipeline] } 00:35:45.226 [Pipeline] // catchError 00:35:45.232 [Pipeline] stage 00:35:45.233 [Pipeline] { (Epilogue) 00:35:45.243 [Pipeline] catchError 00:35:45.244 [Pipeline] { 00:35:45.254 [Pipeline] echo 00:35:45.254 Cleanup processes 00:35:45.258 [Pipeline] sh 00:35:45.539 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:45.539 1796660 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:45.539 1797044 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:45.553 [Pipeline] sh 00:35:45.839 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:45.839 ++ grep -v 'sudo pgrep' 00:35:45.839 ++ awk '{print $1}' 00:35:45.839 + sudo kill -9 1796660 00:35:45.851 [Pipeline] sh 00:35:46.137 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:58.423 [Pipeline] sh 00:35:58.790 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:58.790 Artifacts sizes are good 00:35:58.806 [Pipeline] archiveArtifacts 00:35:58.813 Archiving artifacts 00:35:59.000 [Pipeline] sh 00:35:59.287 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:59.303 [Pipeline] cleanWs 00:35:59.313 [WS-CLEANUP] Deleting project workspace... 00:35:59.313 [WS-CLEANUP] Deferred wipeout is used... 00:35:59.321 [WS-CLEANUP] done 00:35:59.323 [Pipeline] } 00:35:59.344 [Pipeline] // catchError 00:35:59.358 [Pipeline] sh 00:35:59.644 + logger -p user.info -t JENKINS-CI 00:35:59.654 [Pipeline] } 00:35:59.671 [Pipeline] // stage 00:35:59.677 [Pipeline] } 00:35:59.695 [Pipeline] // node 00:35:59.702 [Pipeline] End of Pipeline 00:35:59.749 Finished: SUCCESS